Technology and the law are converging, and where they meet, new questions arise about the relative roles of artificial and human agents, and about the ethical issues involved in the shift from one to the other. While legal technology has largely focused on the activities of the bar, it also challenges us to think about its application to the bench. In particular,
Could AI replace human judges?
The idea of AI judges raises important ethical issues around bias and autonomy. AI programs may incorporate the biases of their programmers and the humans they interact with.
But while such programs may replicate existing human biases, what distinguishes AI from a conventional algorithm is that it can behave in surprising and unintended ways as it ‘learns.’ Eradicating bias therefore becomes even more difficult, though not impossible. Any AI judging program would need to account for, and be tested for, these biases.
The counter-argument appeals to rationality: human judges are already biased, and AI could be used to expose those biases and reduce our ignorance of them. Yet suspicion of AI judges remains, and is already enough of a concern that the European Union's General Data Protection Regulation (GDPR), which took effect in 2018, contains
“the right not to be subject to a decision based solely on automated processing”.
As the English utilitarian legal theorist Jeremy Bentham once wrote in An Introduction to the Principles of Morals and Legislation, “in principle and in practice, in a right track and in a wrong one, the rarest of all human qualities is consistency.” With the ability to process far more data and variables in the case record than humans ever could, an AI judge might be able to outstrip a human one in many cases.
Even so, AI judges may not so much resolve classical questions of legal validity as raise new questions about the role of humans: if we believe that ethics and morality in the law are important, then they necessarily lie, or ought to lie, in the domain of human judgment.
In practical terms, if we apply this conclusion to the perspective of the American legal theorist Ronald Dworkin, for example, AI could assist with examining the entire breadth and depth of the law, but humans would ultimately choose what they consider a morally superior interpretation.
The American judge Richard Posner believes that the immediate use of AI and automation should be restricted to helping judges uncover their own biases and maintain consistency.
At the heart of these issues is a hugely challenging question: what does it mean to be human in the age of Artificial Intelligence?
Source: Open Democracy