AI algorithms in the civil service: Managing risks and safeguarding rights

Reading Time: 5 minutes

Author:

Amelia Williams | Trilateral Research
Date: 3 August 2023

In our monthly ‘What the tech?’ series, our team of subject matter experts, social scientists, data scientists, engineers, ethicists, and legal experts explore the most recent trends in tech and AI, discussing the social implications and providing practical, actionable solutions.

AI is everywhere. You might be hearing reports from its evangelists convinced it’s going to save civilization, or from doomsayers certain it will bring on the apocalypse. The reality, though, is often far more mundane: AI is creeping into public services to relieve administrative burdens made unbearable by austerity, understaffing, and Covid-era backlogs. That quiet spread raises questions about the knock-on effects of incorporating emerging technologies into services that have an outsized impact on human life.

Last month, the BBC reported on the UK Department for Work and Pensions’ (DWP) plans to use AI to risk-score benefits claims. Details are scarce, but initial reports suggest the DWP uses a machine-learning model trained on historical data to flag certain applications as likely fraudulent, and is planning to expand the use of those tools to other areas. Critics say the department has failed to be transparent about its use of AI programs and their impact on privacy and equity.

Similar programs elsewhere have ended in scandal. In the Netherlands, an AI program developed to identify fraudulent childcare benefit claims was revealed to classify applicants from immigrant and minority ethnic backgrounds as high-risk, saddling thousands with debts and forcing the government to resign. In Australia, the “robodebt” scandal was triggered by overreliance on an automated system that erroneously charged welfare recipients with thousands of dollars in debts, resulting in a public inquiry and a multibillion-dollar payout. Even a low error rate in these programs can have massive human costs: victims of these schemes reported financial stress, mental health impacts, marital breakdowns, and, in a few tragic cases, suicide.

But the AI revolution is here, and governments under economic, administrative, and political pressure to cut costs and boost efficiency will want to adopt these tools. In response, Trilateral’s research analysts, data scientists, and ethicists have reviewed automated fraud detection systems used by governments in Spain, the Netherlands, Australia, and the UK to identify their common failures and the safeguards needed to prevent them in the future.


The pitfalls

Three common challenges emerged across all the case studies we reviewed. In part, these were caused by limitations in the technology itself, but they were largely driven by the ways humans interacted with it, highlighting the need for conversations around responsible AI deployment and usage.

Transparency 

Across all case studies, governments were tight-lipped about their use of AI technologies. The inner workings of these programs—which automate and threaten to depersonalize services with significant impacts on applicants’ wellbeing, such as welfare and sick leave—were more likely to come to light through journalists or whistleblowers than through government disclosure. This opacity, often framed as a way to prevent fraudsters from outsmarting the system, hinders external oversight, exacerbating concerns about fairness and undermining trust.

Bias 

Bias plagues all AI systems, but it becomes especially problematic when they’re deployed by governments obligated to deal fairly with their citizens. Algorithms are trained on datasets which carry the prejudices of the time and place in which they were collected. Imagine a country whose leadership wanted to dissuade people who drive red cars from getting mortgages. Perhaps they created arbitrary reasons to disqualify them or saddled them with higher interest rates, making them more likely to default on their payments. An AI trained on this country’s data is likely to flag red-car drivers as high-risk borrowers, or disqualify them completely. This is what seems to have occurred in the Netherlands, where the system flagged people of non-Dutch origin as intrinsically higher risk—but because there was no transparency surrounding the system’s operation, the discriminatory decision-making continued unchecked. Without transparent explanations of how the risk of bias is mitigated, there’s no way to monitor whether historical prejudices are being replicated, and whether governments are violating their commitments to equity and non-discrimination.
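
To make the red-car thought experiment concrete, here is a minimal, purely illustrative sketch in Python. The synthetic dataset, the `red_car` feature, and every threshold are invented for this example; it simply shows that a model fitted to prejudiced historical decisions will reproduce that prejudice as a “risk” signal.

```python
# Illustrative only: synthetic data showing how a model trained on biased
# historical decisions learns to treat an arbitrary attribute as "risk".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

income_k = rng.normal(50, 15, n)   # income in thousands (a legitimate signal)
red_car = rng.integers(0, 2, n)    # arbitrary attribute with no real bearing on risk

# Hypothetical historical decisions: low-income applicants were rejected,
# but red-car drivers were also rejected 40% of the time regardless of income.
rejected = (income_k < 35) | ((red_car == 1) & (rng.random(n) < 0.4))

X = np.column_stack([income_k, red_car])
model = LogisticRegression().fit(X, rejected)

# The learned coefficient on red_car is strongly positive: the model scores
# red-car drivers as higher risk even when their incomes are identical.
print(dict(zip(["income_k", "red_car"], model.coef_[0])))
```

Nothing in the fitting step distinguishes a genuine risk factor from an inherited prejudice; the model simply minimises error against the historical labels it was given, which is why transparency about training data and regular bias assessments matter.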

Accountability 

If a lack of transparency hinders external oversight, deference to AI tools prevents internal vigilance. In the cases reviewed, technology introduced to fulfil mundane administrative tasks was gradually given more leeway, until human workers simply rubberstamped the program’s decisions. This creeping authority of AI systems, underpinned by the assumption that the AI is always right, prevents critical review of the system’s outputs. Nor is this only a problem with sophisticated AI systems—the same dynamics drove the UK Post Office scandal, in which an IT accounting system incorrectly flagged postmasters across the UK for fraud, leading to false convictions and prison time.

In the cases we reviewed, this deferential attitude led to an alarming inversion of the principles of justice: if a computer system fingers you for an unpaid debt, it’s up to you to prove your innocence rather than for the government to prove your guilt. In the Post Office scandal, people who weren’t able to prove that the technology had erred were jailed; Australia’s robodebt scandal left people fighting to prove they didn’t owe debts calculated by automated formulas that weren’t publicly available.

In these cases, deference to technology combined with an opacity that prohibited oversight meant these programs, with all their quirks, errors, and proficiencies, set the agenda that humans followed. If we want to get ahead of this technology—to ensure it meets our needs and upholds our values—essential safeguards must be established.

 

Safeguards 

After reviewing these case studies, the Trilateral team drafted a short list of minimum safeguards that ought to be implemented ahead of any government deployment of automated fraud detection software.

Appeals Process

To protect the rights of individuals assessed by these programs, all automated tools should be deployed alongside an appeals process. The process should be accessible and simple to navigate, with multiple levels of review that allow individuals to challenge decisions and escalate if they disagree with the outcome.

Continuous Training

To prevent deference to and overreliance on these systems, staff should receive continuous training. Civil servants should understand a system’s risks and potential for bias, think critically about its outputs, and oversee its decision-making. These programs should be understood as tools operated by civil servants, rather than ones that replace them.

Bias and Equality Assessments

To counter bias, deployers of the system should regularly conduct bias and equality impact assessments to ensure the system doesn’t discriminate or violate their obligations to protect equity. Learn more about assessing the ethical impact of technology here. 
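
As a purely illustrative sketch (not Trilateral’s assessment methodology), one basic check such an assessment might include is comparing how often the system flags claims from different demographic groups. The column names, group labels, and synthetic audit data below are assumptions made for the example.

```python
# A hypothetical disparity check: compare flag rates across demographic groups.
# Column names and data are illustrative assumptions, not a full bias and
# equality impact assessment.
import pandas as pd

def flag_rate_disparity(df: pd.DataFrame, group_col: str, flag_col: str) -> pd.DataFrame:
    """Per-group flag rates and each group's ratio to the least-flagged group."""
    rates = df.groupby(group_col)[flag_col].mean().rename("flag_rate")
    report = rates.to_frame()
    report["ratio_vs_lowest"] = report["flag_rate"] / report["flag_rate"].min()
    return report.sort_values("ratio_vs_lowest", ascending=False)

# Synthetic audit sample: 1 = claim flagged as potentially fraudulent.
audit = pd.DataFrame({
    "nationality_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "flagged":           [0,   1,   0,   0,   1,   1,   0,   1],
})
print(flag_rate_disparity(audit, "nationality_group", "flagged"))
# A ratio far above 1 means one group is flagged much more often than another,
# which should trigger investigation before the system remains in use.
```

A raw disparity like this doesn’t prove discrimination on its own, but it is exactly the kind of signal that regular assessments are meant to surface and explain.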

Data Protection Impact Assessments

To protect the privacy and rights of individuals being assessed by these programs, systems should collect the least amount of data necessary. Deployers should also conduct regular Data Protection Impact Assessments (DPIAs) to evaluate the risks systems pose to individuals and ensure appropriate safeguards are in place. Find out more about how you can include DPIAs in your AI development here.

Governments continue to adopt AI and automated systems at an ever-increasing pace. It is critical that these tools are developed and deployed ethically and responsibly—with individual rights considered and protected. At Trilateral, developing responsible AI is at the heart of what we do. If you’d like to find out more about our research, services and solutions, get in touch.
