The AI Seoul Summit 2024: Shaping the future of AI through safety, innovation, and inclusivity



Dr Rachel Finn | Director, Data Protection & Cyber-risk Services / Head of Irish Operations

Date: 23 May 2024

Six months ago, the world’s first AI safety summit brought national governments and tech companies together in the UK. Out of that meeting came the Bletchley Declaration, a first-of-its-kind agreement that sought to address the risks of AI while harnessing its potential.

This week, the AI Seoul Summit 2024, co-hosted by the UK and South Korea, brings together global leaders to continue their work on advancing AI safety. But what do they hope to achieve?

Three key themes emerged from the event: safety, innovation and inclusivity. In this blog post, we explore these ideas in more depth and evaluate the potential outcomes of this summit.

Prioritising AI Safety

One of the primary outcomes of the AI Seoul Summit was the establishment of the ‘Frontier AI Safety Commitments.’ Sixteen international firms, including Microsoft, Amazon, and IBM, as well as companies from China and the United Arab Emirates’ Technology Innovation Institute, pledged to adhere to new AI safety standards, building on the foundations of the Bletchley Park summit. These commitments emphasise the identification, assessment, and management of risks associated with frontier AI models. The agreement also calls for transparency and accountability in the development and deployment of these systems. These commitments mirror the responsible AI principles we employ in our child safeguarding tool CESIUM, which prioritises safety, transparency, and accountability.

Furthermore, the summit saw the formation of an agreement between 10 countries and the EU to establish an international network of publicly backed AI Safety Institutes, following the UK’s launch of the world’s first such institute last year. This collaboration aims to align research, standards, and testing efforts to ensure a cohesive approach to AI safety.

Encouraging AI Innovation

The AI Seoul Summit recognised the importance of fostering innovation while maintaining safety standards. Governments are increasing investments in AI research and development, particularly in universities and start-ups. For example, the UK’s Department for Science, Innovation and Technology introduced a £7.4 million pilot scheme, the Flexible AI Upskilling Fund, to support AI skills training for small and medium-sized enterprises (SMEs) in the Professional Business Services sector.

Both governments and private companies highlighted the importance of balancing safety considerations with innovation. OpenAI and DeepMind, two leading AI companies, shared their practices and commitments for developing safe and capable AI models. OpenAI emphasised its systematic approach to safety, implementing measures at every stage of the model’s life cycle, from pre-training to deployment. DeepMind highlighted its Frontier Safety Framework, its approach to understanding the future risks posed by advanced AI models. Both updates point to ongoing collaboration between governments and private organisations as they seek to balance innovation and safety. This aligns with the perspective shared by our CEO Kush Wadhwa, who argues that responsible AI delivers better insights and operational improvements.

Ensuring AI Inclusivity

The summit stressed the importance of inclusive AI development, ensuring that AI technologies benefit diverse sectors and communities globally. Participants discussed equitable AI frameworks, aiming to create guidelines that promote the widespread benefits of AI. Global partnerships were also established to focus on using AI to tackle pressing issues such as poverty and climate change. Participants highlighted the need for a more precise understanding of inclusivity and set the stage for its in-depth examination.

Based on the discussions held, the notion of inclusivity in AI encompasses several distinct dimensions. Firstly, it entails empowering developing countries to enhance their capabilities in AI design, development, and utilisation. Secondly, it recognises the importance of ensuring that women and minority groups have a more significant role in shaping the world of AI, as demonstrated by the creation of the UN Secretary-General’s High-Level Advisory Body on AI, which is gender-balanced and geographically diverse. Finally, it acknowledges the need for AI to be able to account for and cater to the requirements of marginalised and vulnerable groups. However, doing so is challenging, as it relies on sufficient data from traditionally excluded populations to effectively train AI tools. At Trilateral, our ethics-by-design process advances inclusive outcomes wherever possible, as shown by a recent case study of our CESIUM application.

What Next?

The AI Seoul Summit 2024 marked a crucial step in sustaining the international momentum needed to actively shape the future of AI. The Summit’s outcomes, including the Frontier AI Safety Commitments, the establishment of an international network of AI Safety Institutes, and the emphasis on inclusive AI development, demonstrate the global commitment to responsible AI advancement.

As the world continues to navigate the challenges and opportunities presented by AI, the collaborative efforts and principles established at the AI Seoul Summit will serve as a foundation for ongoing international cooperation. The summit’s focus on safety, innovation, and inclusivity underscores the importance of developing AI technologies that benefit humanity while mitigating potential risks.

As global leaders work towards a safer, more innovative, and inclusive AI future, compliance teams must also play their part in creating a responsible AI culture within their organisations. Learn how to align your efforts with these principles by reading our latest blog post on creating a responsible AI culture.
