This week, the UK has played host to the world’s first AI Safety Summit, driven by a critical need to stay ahead of a technology that is developing at an unprecedented pace. With the EU AI Act close to implementation, and other countries racing to keep up, it’s no surprise that the UK government took the initiative to get to the heart of the issue: ensuring that AI is safe.
What did the UK government hope to achieve at the Summit?
The Summit had five main objectives:
- a shared understanding of the risks posed by Frontier AI and the need for action
- a forward process for international collaboration on Frontier AI safety
- measures which individual organisations should take to increase Frontier AI safety
- areas for potential collaboration on AI safety research
- a showcase of how ensuring the safe development of AI will enable AI to be used for good
On Wednesday, the Bletchley Declaration was published – an unprecedented international agreement on AI safety – and it is clear there is work to be done. At Trilateral, we have closely monitored activities in the lead-up to the Summit over the past couple of weeks, attending meetings and pre-events to ensure responsible and trustworthy AI (RTAI) is a core part of the discussion. We have also been involved in the development of key interventions, ensuring responsibility and trustworthiness are at the core of Frontier AI.
Trilateral: Leading the way in RTAI
On Tuesday 31 October, Innovate UK and Bridge AI published their ‘Report on the Core Principles and Opportunities for Responsible and Trustworthy AI’ (RTAI). The report was developed and delivered by our team of interdisciplinary AI experts and marks a pivotal moment for AI in the UK. As has been recognised in this week’s Bletchley Declaration, AI ‘presents enormous global opportunities’ but ‘also poses significant risks’.
Our seminal report provides a clear framework to allow UK organisations to capitalise on those opportunities whilst mitigating those risks, with our experts identifying three fundamental principles:
- Ensuring AI is developed sustainably
- Embedding Sociotech principles into AI solutions
- Integrating AI assurance processes into all aspects of AI
Whilst we recognise that each industry and organisation will have its own bespoke requirements from AI, it is critical that these principles form part of the foundation on which AI is built. In particular, our ground-breaking Sociotech approach (the integration of social/ethical principles into technology) should be adopted across all industries, and our robust approach to assurance and governance should be used as a benchmark in AI development. Organisations must be ready to take on the responsibility that comes with the development and adoption of AI, ensuring that the solutions they implement are transparent and explainable, safe and secure, non-maleficent, and that they prioritise privacy, fairness, justice and accountability.
Over the past few weeks, we have been involved in several key Summit pre-events, and throughout have contributed to and led discussions on the role of RTAI.
The industry leaders’ perspective
On 19 October, our CEO Kush Wadhwa attended techUK and the Department for Science, Innovation and Technology’s pre-Summit event, with AI industry leaders from around the UK. There were three main takeaways:
- There is a need for an agile, collaborative working relationship between industry, civil society and government – it is imperative to build safe AI that inspires trust.
- There is broad agreement on the need for regulation in this space – further discussion must be had on the most appropriate model for the UK.
- There are clear opportunities for innovation, market capture and policy and standards development in AI assurance, sustainable AI and the Sociotechnical development of AI.
At Trilateral, our Sociotech method is fundamental to the development of ethical AI tools. This means integrating legal experts, subject matter experts and ethicists alongside technical experts when we build our tools and help clients to implement them. Another strong focus is the shared responsibility models we build with our clients to ensure transparency, fairness, privacy and non-discrimination within our systems.
Both our Sociotech and shared responsibility methods have been integral to the development of the RTAI report above, and now form part of our recommendations for best practice in AI.
The role of AI in child protection
On Monday 30 October, our Head of Programme for Human Rights and Human Trafficking, Dr Julia Muraszkiewicz, joined a panel of experts at an AI Safety Summit pre-event, also attended by our CEO Kush Wadhwa, to examine how AI is being used to create child sexual abuse imagery, and how it can be used to tackle the issue.
Over the past five years, our work on CESIUM has put Trilateral in a unique position when it comes to child protection. CESIUM is our flagship AI solution, which supports safeguarding partnerships around the UK in the management and analysis of multi-agency data on some of the most vulnerable people in our society – children. Our work has given us insight into the critical issues faced by all stakeholders, and the knowledge needed to develop a ground-breaking AI solution.
On Monday, it was once again clear that AI can be a double-edged sword, offering fantastic potential but also new ways for our most vulnerable to be exploited. At Trilateral, this duality is not lost on us. From our years of research into AI and future tech, we know just how AI could be used nefariously – but we firmly believe that, with the adoption of RTAI, it can be a powerful force for good. In our recent article on AI in law enforcement, we take a deeper look at the balance of power, innovation and ethics that will be required as AI advances in this area.
What are the next steps in this new Frontier?
The Summit’s programme this week has focused on Frontier AI – highly capable models that can match or exceed the performance of today’s most advanced systems. On Wednesday, we published an article taking a more detailed look at what Frontier AI is, its risks, and how RTAI can be used to prevent harms while reaping the benefits of AI.
At Prime Minister Rishi Sunak’s closing press conference, he confirmed four areas of broad agreement:
- A need for an open and inclusive conversation for a shared understanding of AI
- An understanding that we must keep pace with AI development
- A landmark agreement that AI should be safety tested before release
- The requirement for an ongoing international process to stay ahead of the curve
The Prime Minister also confirmed the commissioning of a ‘State of the Science’ report to understand the capabilities and risks of AI, and that further AI Safety Summits will be held next year in France and South Korea.
From a Trilateral perspective, the Summit’s focus on research is positive, and the prospect of an ‘AI Safety Institute’ is welcome. The evaluation of ‘safety’ will become critically important, requiring domain and contextual knowledge and a Sociotechnical approach. With the Bletchley Declaration now in place, and as we await the longer-term outcomes of the Summit, we hope that a clear, actionable roadmap emerges. All stakeholders, in all industries, should have clear guidance on how to put robust measures in place to achieve safety-by-design within AI.
At Trilateral, we will continue to advocate for RTAI and integrate our Sociotech, shared responsibility and assurance methods into our suite of software products. More broadly, we will engage with stakeholders, in all industries, to ensure best practice is followed, and we will lead the conversation to make sure these standards do not slip.
In terms of the Summit objectives, we will:
- Lend our voice, knowledge, skills and expertise to enhance the understanding of RTAI
- Engage in conversations that promote international collaboration on AI safety
- Lead the way in ensuring that high standards of Frontier AI safety are met
- Leverage our ground-breaking research teams to support governance
- Continue to promote the use of AI for social good
At Trilateral, we are passionate about harnessing the best of technology for social good. As we move forward in this new Frontier, we are sure this can be done safely with responsible, trustworthy AI.