Accountability should not be blurred in the age of generative AI

Generative artificial intelligence is no longer experimental. Tools that draft reports, screen job candidates, design logos, write code, and produce images are now embedded in the workflows of private enterprises and public agencies alike. Unlike earlier forms of AI that merely analysed data, generative systems create new content and recommendations. That creative capacity forces a pressing question: when AI systems act, who is responsible?

The answer cannot be ‘the machine’.

This is especially relevant in workforce management. Generative AI is already being used to summarise performance reviews, recommend promotions, optimise staffing, and draft termination documentation, and it is sold on a promise of neutrality and efficiency.

But if a flawed model leads to discriminatory outcomes, reputational damage, or financial loss, responsibility cannot and does not dissolve into the code. Executives who deploy such systems, boards that authorise them, and developers who design them all sit somewhere along the chain of accountability. Where, and to what degree, is the question.

The law has long understood that delegation does not eliminate liability. We do not excuse a defective product because it was manufactured by an automated process. Nor should we excuse decisions made through AI simply because they were statistically generated. Human oversight is not a mere preference — it is a structural necessity.

There are also systemic risks. Generative systems can be manipulated, biased through their training data, or quietly altered by adversarial attacks. A hacked or corrupted system handling hiring, procurement, or financial decisions could cause widespread harm before anyone notices. AI governance frameworks must therefore require accountability, auditability, transparency of deployment, and clear lines of responsibility within organisations. “The AI said so” is not an acceptable defence.
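To make “auditability” and “clear lines of responsibility” concrete, here is a minimal sketch in Python. It assumes a hypothetical organisation that logs every AI-assisted decision together with a named accountable human; the `AIDecisionRecord` structure, its field names, and the `log_decision` helper are illustrative, not any standard or existing framework.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    """One auditable entry: what the system did, and which human owns it."""
    decision: str            # e.g. "candidate advanced to interview"
    model_id: str            # which model produced the output
    model_version: str       # exact version, so drift can be traced
    inputs_summary: str      # what the model was given (or a hash of it)
    accountable_human: str   # the named person answerable for this use
    reviewed_by_human: bool  # was this specific output checked before acting?
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: AIDecisionRecord, path: str = "ai_audit.log") -> None:
    # Append-only JSON lines: simple to write, easy to review after the fact.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a hiring recommendation is never just "the AI said so";
# the log names the model *and* the person answerable for its use.
log_decision(AIDecisionRecord(
    decision="candidate advanced to interview",
    model_id="screening-model",
    model_version="2.3.1",
    inputs_summary="CV + role requirements",
    accountable_human="J. Ortega, Head of Talent",
    reviewed_by_human=True,
))
```

However an organisation implements it, the design point is the same: every automated decision carries a named human owner, so accountability never dissolves into the code.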

Beneath the technicalities lies a simpler principle, one that revolves around functions. In complex technical settings, lawmakers at the primary level do not write intricate rules about technologies themselves; they identify particular functions and regulate those (a rule governing who may dismiss an employee, say, applies whether the paperwork was drafted by a clerk or a model). The same approach makes clear that technology does not absolve humans of responsibility. We built these systems and gave them their functions. We choose where to deploy them. We benefit from their efficiencies. Accountability must follow those choices.

In this way, considering the functions of an AI deployment is a sound way to approach governance. Who gave the system that function, and why, when, and how? What did they take into account in its design, implementation, and deployment? What standards, including ethical ones, did they apply? In short, what was their human judgement? This framing insists that humans retain the reins, whether for blame and liability or for wise decisions, and that the future will not be one of blaming or praising the AI or its algorithms.

