Last month, we wrote about how generative AI could be used in the civil service, encouraging governments and civil servants to familiarise themselves with the benefits and risks of this new technology. The article reflected one of our central beliefs: AI is coming, bringing with it serious ethical, legal, privacy, and social risks. Responsible AI doesn’t mean refusing to work with the technology or worrying about doomsday scenarios; it means understanding how the technology works, evaluating its impacts and risks, and implementing it alongside concrete safeguards, such as transparency and external oversight.
In late June, the UK government published its own guidance to civil servants on the use of generative AI. The guidelines are permissive and encourage civil servants to familiarise themselves with the technology and how it can be integrated into their work, embodying our own approach of facing the technology head-on. But, at the same time, the guidance is sparse on details, especially regarding safe use of the technology. Its main advice includes: be “cautious and circumspect” when using AI; never enter classified or sensitive data into AI models; be mindful of the principles of the GDPR; and be wary of bias and misinformation.
At this month’s ‘What the tech?’ session, we analysed the strengths and shortcomings of this new guidance, and asked: If you worked in the civil service, would this guidance give you the information you need to feel confident using generative AI? We identified three main areas where improvements could be made: training, liability, and appropriate oversight.
Training
The new guidance warns civil servants to be vigilant about the potential harms of AI but seems to provide limited detail about what these harms may be. At this point, we’ve all heard about the common issues with generative AI—it can be biased, lean on outdated sources, or generate incorrect information—but these shortcomings are not always easy to spot.
Take, for example, bias. Bias is obvious if an AI model generates flagrantly racist language, but most of the time, racially biased outputs are more subtle. Tell an AI image generator to draw you a picture of a professional in any field, and the person it depicts will almost certainly be white, and most likely male. Unless you repeat the task a number of times, you might not realise this is a trend. The pattern becomes clearer still if you prompt the AI to depict an employee in a low-paying job, which most commonly generates images of darker-skinned people. Other types of AI have assigned black defendants in court higher risk-of-reoffending scores, treated immigrants as more likely to defraud the welfare system, and prioritised medical treatment for white patients over black patients in greater need of care.
To better manage these risks, civil servants need proper AI education. They should understand how generative AI models are trained, how training data can lead to biased or inaccurate answers, and what these flawed results might look like. This training should be updated regularly and supplemented by guidance tailored to individual departments, and there should be resources civil servants can turn to with specific questions. Such initiatives are already in place around the world: Singapore and Brazil, for example, run AI education programmes for civil servants. In the UK, Trilateral partnered with The Turing Institute to develop Intermediate and Expert courses that can close some of these gaps. Without such training, it is impossible to provide adequate oversight of AI models.
Liability
The language of the guidance seems to place responsibility for AI on the civil servants using the systems. Who, then, is liable for the fallout if a bad AI output causes harm? Our team recommends that governments articulate a liability policy that makes explicit which potential consequences of AI use are the responsibility of civil servants and which fall on the government. This would ensure that civil servants engaging with AI tools have a comprehensive understanding of the risks of these new technologies and the support they need to use them effectively. Ultimately, in the UK, civil servants are accountable to ministers, who are in turn accountable to Parliament. Even with a new tool like AI, this accountability should be maintained.
Oversight
This new guidance comes amid intense government scrutiny of the civil service, including talk of job cuts. In this context, some are concerned that generative AI could be used to replace, rather than supplement, human employees. Where AI tools have been allowed to take on decision-making tasks without proper oversight, the results have included a range of negative unintended outcomes. Examples include the UK Department for Work and Pensions’ plans to use AI to risk-score benefit claims, and the Netherlands’ attempt to use AI to identify fraudulent childcare benefit claims.
We recommend implementing generative AI (and other AI software) as a tool to reduce administrative burdens and the time spent on routine, formulaic work, while ensuring outputs are scrutinised by employees trained to recognise errors and bias. These safeguards require sufficient investment; any implementation of AI (whether in the public or private sector) must be resourced appropriately.
Our minimal recommended safeguards
The UK government’s new guidance on the use of generative AI could be the first step towards a more comprehensive policy. As it stands, it does a good job of introducing the technology to civil servants and may stem the growing issue of “Shadow AI”: unregulated AI use by employees looking to increase their efficiency, which may inadvertently violate data protection policies or security classifications. Concrete, department-specific details and training on AI safety are the necessary next steps. We have developed a list of minimal safeguards for government AI usage, including continuous training, bias and equality assessments, and Data Protection Impact Assessments, all of which should be considered when deploying generative AI in the civil service.
As governments adopt cutting-edge AI technologies, it is imperative that these tools are deployed responsibly and safely, alongside education initiatives for the civil servants who will operate them. At Trilateral, developing responsible AI is what we do. If you’d like to find out more about our services, get in touch.