What businesses can learn about AI from the public sector 

Reading Time: 4 minutes

Author:

Dr Hayley Watson | Director, Sociotech Innovation

Date: 30 May 2024

It would be easy to assume that the private sector is paving the way for AI adoption, and that the public sector still has a lot to learn. Yet while 70% of private sector organisations already use, or are planning to use, AI, government departments are expected to deliver AI adoption plans by June 2024, signalling the government’s own push behind AI use.

With expensive legacy technology to upgrade, overstretched workforces to train, and intense public scrutiny to navigate, it’s true that the public sector faces barriers to modernising, many of which are shared by other sectors. Yet its contribution to the era of AI adoption shouldn’t be discounted. In fact, many public sector organisations offer a shining example of how to adopt AI tools in the right way.

Here are three key lessons that the public sector can teach businesses about adopting AI responsibly, based on our experience of developing and deploying AI for government bodies and law enforcement agencies.  

 

1. Create a responsible AI framework before you adopt AI 

All organisations must build trust with their stakeholders, both internally and externally, but the playing field is different in the public sector. There, the primary stakeholder – the one that holds an organisation responsible for its AI use and outcomes – is the public itself.

That’s why there’s a real appetite among public sector organisations to operationalise AI correctly the first time, to ensure accountability and establish public confidence and trust.

As demonstrated by the National Police Chiefs’ Council (NPCC), a key part of this is to begin by establishing a framework for responsible and trustworthy AI use, rather than jumping straight in. With the community’s support, the NPCC developed the Covenant for Using Artificial Intelligence (AI) in Policing – a set of principles that outlines forces’ commitments to developing and using AI responsibly in policing, and against which police employees and the industry alike can be held accountable.

This is a practice that all organisations, public or private, should mirror. With an established framework for AI, businesses can make sure they get the best from their AI tools, while protecting their clients, customers, and the public. This framework should reflect their internal values, compliance requirements, and commitments to stakeholders, so that they can establish trust throughout the entire AI adoption process. 

 

2. Develop responsible AI to inform decision making 

In Europe, the passing of the AI Act has shone a light on why it’s crucial to develop responsible AI. At Trilateral Research, interdisciplinary teams do so using ‘sociotech’, a method for designing AI tools that support data-driven decision making for complex social challenges. Through our projects, many public sector organisations are already seeing the benefits. The way AI is designed matters, as it shapes how meaningful its insights can be for those who rely on them to drive operations and decision making.

Take Trilateral’s work with Lincolnshire Police as an example. We designed CESIUM, an AI tool for child safeguarding, with ethics principles at its core, including bias assessments and transparency tools to support decision making. Because its design allows for more accurate insights, CESIUM has increased operational capacity for Lincolnshire Police by 400%.
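CESIUM’s internals aren’t public, so the following is only a minimal sketch of the kind of check a bias assessment might include: comparing a model’s flag rates across demographic groups and screening the gap between them. The data shape, group labels, and 0.8 threshold below are illustrative assumptions, not Trilateral’s method.

```python
from collections import defaultdict

def flag_rates_by_group(records, group_key="group", flag_key="flagged"):
    """Share of records flagged by the model within each group.

    `records` is a list of dicts, e.g. {"group": "A", "flagged": True}.
    Keys and group labels are hypothetical, not CESIUM's schema.
    """
    totals, flagged = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        flagged[r[group_key]] += int(bool(r[flag_key]))
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest group flag rate to the highest.

    A common rule of thumb treats a ratio below 0.8 as a prompt for
    further investigation; it is a screening heuristic, not a verdict.
    """
    return min(rates.values()) / max(rates.values())

sample = [
    {"group": "A", "flagged": True}, {"group": "A", "flagged": False},
    {"group": "B", "flagged": True}, {"group": "B", "flagged": True},
]
rates = flag_rates_by_group(sample)
print(rates, disparate_impact_ratio(rates))  # {'A': 0.5, 'B': 1.0} 0.5
```

In practice, a screening ratio like this would sit alongside qualitative review; the point is that the check runs routinely, not as a one-off audit.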

Alongside increasing capacity and decision-making capabilities, responsibly built AI can support corporate social responsibility (CSR) efforts – a key focus for the private sector. With the appropriate framework driving their development and use, AI tools can support all manner of positive business practices, whether by improving understanding of supply chains or by assessing the impact of, and gaps in, equality and diversity initiatives.

To learn more about the commercial benefits of responsible AI, you might enjoy CEO Kush Wadhwa’s recent article: Responsible AI doesn’t impede innovation – it supports it. Here’s why.  

 

3. Use end-to-end, ‘by design’ processes 

Developing AI responsibly should also be regarded as an end-to-end process – driven by a strong AI framework. Ethical AI principles such as transparency and fairness aren’t an ‘amendment’ or a one-off risk assessment at the end of the design cycle; they are an ongoing part of the AI lifecycle. This is critical whether organisations are developing AI tools from the ground up or modifying them before deployment.

The development of CESIUM again presents a great example of this end-to-end process in action. Together with Lincolnshire Police, we embedded the appropriate ethics principles into CESIUM’s development from the very start – from defining the platform’s requirements, to developing training, to designing how end users would manage any ethical risks that arose when integrating the system into their operations. By using sociotechnical methods that build non-technical considerations such as workflow, bias, data protection and security into the design, Trilateral was able to deliver a trusted system that cut the time and effort needed to analyse safeguarding data for referral purposes from five people over five days to one person in 20 minutes.
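One way teams sometimes make this ‘by design’ discipline concrete is to gate each release on the governance artefacts the framework requires, so ethics work can’t be skipped late in the cycle. The artefact names and path below are hypothetical, not Trilateral’s actual process:

```python
from pathlib import Path

# Hypothetical artefacts a responsible-AI framework might require per release.
REQUIRED_ARTEFACTS = [
    "bias_assessment.md",
    "data_protection_impact_assessment.md",
    "transparency_notes.md",
    "user_training_plan.md",
]

def release_gate(release_dir: str) -> list[str]:
    """Return the governance artefacts missing from a release directory.

    An empty list means the release may proceed; otherwise deployment
    stays blocked until every artefact is produced and reviewed.
    """
    root = Path(release_dir)
    return [name for name in REQUIRED_ARTEFACTS if not (root / name).is_file()]

missing = release_gate("releases/v1.2")  # illustrative path
if missing:
    raise SystemExit(f"Blocked: missing artefacts: {missing}")
print("Governance gate passed.")
```

A gate like this is deliberately simple: it doesn’t judge the quality of an assessment, it only makes its absence impossible to ignore.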

For businesses to get the best from their AI tools, they need to live and breathe responsible AI through end-to-end design. This requires buy-in from across the organisation, but it also requires a digital skills uplift and multi-stakeholder engagement. After all, building responsible AI is one thing; relying on your teams to maintain and scale it is another. By investing in interdisciplinary skills and AI training, organisations can close the skills gap and bring their teams along on their responsible AI journey.

 

Get the best from your AI tools with Trilateral Research 

Many of the projects we contribute to at Trilateral Research are rooted in the public sector, so we’ve seen first-hand how transformative responsible AI frameworks and sociotech design are. We’re also committed to informing best-practice approaches for the private sector and supporting businesses to realise the many benefits of responsible and trustworthy AI. 

To learn more about sociotech and its value as a design technique, you might enjoy our blog, Responsible technology: the benefits of a sociotech approach. 

You can also read more insights from Trilateral Research by following me on LinkedIn. 
