Child safeguarding AI application CESIUM showcased by CDEI and OECD



Trilateral Research

Date: 13 September 2023

In June 2023, the Centre for Data Ethics and Innovation (CDEI) included CESIUM, a state-of-the-art AI solution for child safeguarding, in its Portfolio of AI Assurance Techniques. This was closely followed by the addition of CESIUM to the Organisation for Economic Co-operation and Development's (OECD) catalogue of tools and metrics for trustworthy AI.

CESIUM, an AI application developed by Trilateral Research and co-designed with Lincolnshire Police, is being showcased by both the CDEI and the OECD on the strength of the rigorous trustworthy-AI methodologies applied in its development, implementation and governance.

The software transforms the current approach to child safeguarding. A validation exercise conducted by Lincolnshire Police in 2022, using live data, demonstrated its potential to identify vulnerable children much earlier: CESIUM enabled 16 vulnerable children to be identified up to six months before the existing process would have done. Further analysis proactively identified three vulnerable children for pre-screening risk assessment who had previously been hidden in the data.

The validation exercise also indicated a 400% increase in operational capacity within a Safeguarding Partnership setting, along with a sharp reduction in time spent on administration: from five people researching, gathering and analysing data over five days to one person analysing all near-real-time data within 20 minutes.

The CDEI AI assurance portfolio, developed in partnership with techUK, showcases trustworthy AI assurance techniques for those involved in the design, development, deployment or procurement of AI.

The OECD catalogue of tools and metrics for trustworthy AI gives AI stakeholders a central repository for information on proven approaches, mechanisms and practices for trustworthy AI.

The inclusion of CESIUM in both catalogues reflects the robust, responsible AI approach applied throughout the software's development and implementation – a credit to the co-design team from Lincolnshire Police and Trilateral Research.

“Our company’s core foundation is built upon a rigorous commitment to research and innovation, fostered by our team of interdisciplinary experts spanning social science, ethics, law, design and technical sciences. This collective expertise fuels purposeful solution design that exemplifies ethical AI implementation within CESIUM. We look forward to continuing to advance responsible AI innovation and to supporting Safeguarding partnerships across the UK.”  Hayley Watson, Director – Sociotech Innovation, Trilateral Research 

Find out more about Trilateral's work in responsible, ethical AI.
