Just as every action has its equal and opposite reaction, it seems that every proposed use of generative AI has its downside—usually the first step towards economic collapse, a technological super-elite, or some other catastrophe. But as big tech pushes AI ahead, it serves nobody to stand paralysed at the brink of a slippery slope. Instead, experts need to step forward and lead conversations about where generative AI ought to be applied and where it must be avoided, how it can be implemented safely and ethically, and what guardrails should be in place to ensure it doesn’t run amok.
Inspired by the recent publication of the UK’s guidance for civil servants using generative AI, Trilateral’s monthly ‘What the Tech?’ session considered possible uses of generative AI (which can produce text, images, and audio) in the public sector. The conversation follows last month’s session on government use of AI algorithms for fraud detection, where we covered some high-profile misuses of the technology and drafted a list of minimum safeguards for governments using these programmes.
At Trilateral, we’ve identified a number of government functions that could potentially be improved with the use of generative AI:
Governments manage a lot of tasks and a lot of information. Since the transition to digital services, they’ve managed these notoriously poorly, often through outdated, difficult-to-navigate websites. Generative AI could help structure the abundance of information living on these sites, and may also be useful for writing code and creating software to improve the delivery of online services.
Perhaps its most promising function in this domain isn’t a cutting-edge innovation, but the improvement of something that’s been around for years: the chatbot. Improved language processing and outputs could make these bots far more useful than they are now; imagine a bot that could walk you through each step of your passport renewal, locating and simplifying the information you need and tailoring it to your personal circumstances. Gone would be the long days citizens spend waiting in government offices, and the hours civil servants spend guiding people through bureaucratic processes.
Civil servants shoulder a massive burden when it comes to producing information in the form of briefings, public announcements, and speeches. These tasks are often repetitive (certain speeches are reproduced annually with minimal changes) and time-consuming. In many ways, this is the ideal use case for generative AI, which can take raw information and integrate it into a coherent body of text in minutes, if not seconds. But this use comes with a warning: as we discussed in last month’s article, AI tools used to ease bureaucratic burdens in the civil service have previously been allowed to operate without sufficient oversight, resulting in serious errors.
Lawmaking and compliance
Laws are complex, and in heavily codified systems they often exist in dozens of iterations dating back decades. For example, if you’re working in the UK government and want to amend fishing rights, you’d be wading into a body of legislation dating back to the 16th century. Generative AI could be used as a supporting tool to analyse these laws, identifying gaps and ensuring new legislation isn’t redundant or conflicting. Generative AI could also be used to translate laws and policies into plain language, making them accessible to the public. Again, chatbots could be useful here; they could summarise ongoing legal changes or allow citizens to check their compliance with complex systems, such as the tax code.
Governments often respond to crises with big-picture commitments: promises to end child hunger or eliminate fossil fuels. But how do you start working towards a commitment of that scale? Generative AI tools could be particularly useful for analysing large data sets and explaining the results of these analyses in plain, accessible language. Advocates argue that using generative AI to generate policy insights could also extend to more routine analytical tasks, such as identifying which transit systems are most cost-effective or spotting opportunities for tailored policy interventions. Generative AI could also streamline the management of government assets, such as infrastructure, bonds, and land.
The risks of generative AI
Despite these promising use cases, much uncertainty surrounds the implications of using generative AI for certain tasks, such as:
- Policy and decision-making: Whilst generative AI can assist in interpreting data and communicating information, it should not be used to make policy decisions. Generative AI cannot understand the issues involved; this responsibility must rest with people.
- Moderating online communities: Ban an online community engaged in harmful speech, and they’re more than likely to pop up again on a new platform or under a different name. Some experts argue that it’s more effective to infiltrate groups and redirect the conversation towards safer topics, which is where generative AI could come in. But this raises a host of ethical and legal questions, such as the impact on free speech.
- Assisting overburdened care systems: When it comes to seeking mental healthcare, being placed on a lengthy waiting list can exacerbate symptoms and cause significant harm. Unfortunately, that’s frequently the case in the UK. Could generative AI act as a stopgap measure, triaging waitlisted patients and flagging those deemed particularly vulnerable? This raises concerns, particularly about the impact of treating vulnerable patients with such an impersonal service, and the dangers of relying on AI in overburdened systems that may lack the capacity to provide sufficient oversight. A potential solution is human-led risk assessment, which harnesses the analytical capabilities of AI whilst leaving ultimate decision-making to humans; our state-of-the-art CESIUM solution is one example.
Generative AI is likely to become a ‘go-to’ tool for many public (and private) sector functions. But these tools must be developed, implemented, and controlled responsibly. At Trilateral, responsible AI is our speciality—if you’d like to know more about our work in this area, get in touch.