Disputing the Independence and Impartiality of an AI-assisted Judiciary

Reading Time: 3 minutes

Author: Dr Julia Muraszkiewicz, Head of Programme for Human Trafficking and Human Rights

Date: 24 June 2021

From private technology companies to national courts, artificial intelligence (AI) tools are driving unprecedented change in the public sector. Unless addressed with careful consideration, this development has the potential to undermine the fundamental right to a fair trial, in particular the requirement for an independent and impartial tribunal.

Trilateral Research evaluates machine learning algorithms using a diverse set of algorithmic transparency techniques, producing solutions that enable ethical, human-focused, machine-assisted decision-making rather than fully automated decision-making.
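To give a flavour of what one such transparency technique can look like in practice, here is a minimal sketch using permutation feature importance, a standard model-agnostic method. The model, synthetic data, and feature names are illustrative assumptions for this post, not a description of Trilateral Research's actual tooling:

```python
# Illustrative sketch: permutation feature importance as a transparency
# technique. All data and names here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                  # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # label driven by features 0 and 1

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: mean importance drop = {score:.3f}")
```

Surfacing which inputs actually drive a model's recommendation is one prerequisite for a judge, rather than the algorithm, remaining the decision-maker.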

In this blog, we analyse the impacts of using AI in the courtroom to assist judges in their decision-making and the importance of increasing justice professionals’ AI literacy for better and more responsible use of these tools.

Independence vs. Automated Decision-making

The difficulty AI presents to judicial independence concerns judges’ potential over-reliance on its findings and recommendations, which may amount to an undue influence on the judiciary, something prohibited under the ECtHR’s case-law.

Previous research has shown that humans are highly susceptible to automation bias, i.e., the tendency to favour and place undue confidence in decisions made by automated systems, even when presented with contradictory information. Accordingly, although AI is only supposed to assist judges in their decision-making, this bias could contribute to the emergence of de facto AI judges and endanger the independence of the court.

Another bias with equivalent effects is the anchoring effect: the human tendency to over-rely on an initial or “anchor” value, especially when that value carries “an aura of certainty and fact”, as the recommendations of an AI system do. Human judges may therefore lack the knowledge and confidence needed to challenge an algorithm’s recommendations.

This concern recently arose in State v. Loomis, in which the Wisconsin Supreme Court (2016) held that the use of algorithmic risk-assessment tools – in this case, COMPAS – conforms to the defendant’s due process rights as long as judges are provided with a warning about the tool’s limitations. An important drawback of this warning, however, is that “it is silent on the strength of the criticisms of these assessments, it ignores judges’ inability to evaluate risk assessment tools, and it fails to consider the internal and external pressures on judges to use such assessments”.

If the last decades of psychological research are any guide, judges will in all likelihood be unable to escape the influence of AI on their decision-making in the near future, which could seriously limit the independence of the judiciary guaranteed by the right to a fair trial.

Impartiality vs. Bias

The impartiality requirement of the European Convention on Human Rights encompasses, inter alia, an objective test, the decisive factor being whether the accused’s suspicion that the tribunal lacks impartiality is legitimate and can be objectively justified.

This is of great importance because AI assistants can exhibit prejudice or bias. Taking racial bias as an illustration, ProPublica’s analysis found that COMPAS was 77% more likely to classify African-American defendants as high risk than white defendants, while white defendants were more often erroneously judged to be low risk. If judges fail to correct for these algorithmic biases, potentially because of their own biases, introducing AI into criminal proceedings could create a “disproportionate feeling and experience of justice”.
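To make the underlying disparity measurement concrete, here is a minimal sketch that computes group-wise false positive and false negative rates for a binary risk classifier. The records are fabricated placeholders, not real COMPAS data, and this is not ProPublica’s actual methodology:

```python
# Illustrative sketch: measuring group-wise error-rate disparities,
# the kind of metric at issue in the COMPAS debate.
# The records below are fabricated placeholders, not real data.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", False, True), ("B", False, False), ("B", True, True),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})

for group, predicted_high, reoffended in records:
    c = counts[group]
    if reoffended:
        c["pos"] += 1
        if not predicted_high:
            c["fn"] += 1  # labelled low risk, but did reoffend
    else:
        c["neg"] += 1
        if predicted_high:
            c["fp"] += 1  # labelled high risk, but did not reoffend

for group, c in sorted(counts.items()):
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"group {group}: false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

A persistent gap between groups’ error rates, observed at scale in real data, is exactly the kind of objectively verifiable disparity an accused could point to when questioning a tribunal’s impartiality.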

Given these challenges to the independence and impartiality of an AI-assisted judiciary, the EU is advised to ensure that new technologies conform to established legal protections. Because AI in the courtroom could reinforce existing social divisions and endanger our fundamental liberties, the European Union’s new draft Regulation has a great number of unprecedented challenges to address. While the impartiality requirement is tackled by the so-called Artificial Intelligence Act, the independence element still poses a substantial threat to defendants’ rights before an AI-assisted judiciary.

Recognising the opportunities AI applications offer for reforming the criminal justice system (e.g., easing resourcing pressures, enabling more proactive engagement with victims, and broadening the availability of data and infrastructure), the development of this technology should not be abandoned altogether. Instead, the focus should shift towards increasing justice professionals’ AI literacy as well as the digital literacy of the general population. This increased literacy would improve the public perception of AI-assisted courts, as research has shown that people with higher technology literacy are more likely to trust AI outputs. Hence, the draft Regulation should incorporate obligations for Member States to implement educational efforts for the next generation of AI users and legal professionals, thereby reducing public distrust of judicial independence.

Author: Zoé Gáspár, University of Amsterdam

For more information, please contact our team.

 
