The AI Pact: Shaping a Trustworthy AI Future Together

Reading Time: 3 minutes

Author:

Francesca Trevisan | Research Analyst

Date: 9 January 2024

2023 closed with a bang for AI regulation in the European Union. 

On December 8, after six months of negotiations, the Parliament and the Council reached a provisional agreement on the text and rules that will make up the EU AI Act. This is an important milestone in the AI Act's journey, moving the Regulation a step closer to its formal adoption by the EU Parliament and the Council and to its entry into force. 

Alongside the final negotiations on the text of the legislation, the European Commission also launched a call for interest for organisations that want to be involved in the AI Pact. While the AI Act took centre stage, the AI Pact might sound less familiar. This article will outline the specifics of the AI Pact and provide information on the benefits of joining.
 

The AI Act: A Glimpse into the Future 

In 2021, the European Commission introduced the AI Act, a regulatory framework designed to ensure AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values.  

While some provisions will take effect shortly after the Act's adoption in 2024, others, especially those related to high-risk AI systems, will only come into play after a transitional period. 

To help organisations stay ahead of the curve, the European Commission launched the AI Pact, a voluntary initiative for organisations willing to prepare for and implement the AI Act's requirements ahead of the legal deadlines. 

 


The AI Pact: Collaboration for Trustworthy AI

The AI Pact is a proactive scheme aimed at assisting EU and non-EU organisations in planning ahead for AI Act compliance and at encouraging early adoption of the measures outlined in the Act. 

The AI Pact invites organisations, including industry, to demonstrate their commitment to the AI Act's objectives and voluntarily communicate the steps they are taking to prepare for compliance and to ensure that the design, development and use of AI are responsible and trustworthy. 

Organisations involved in the AI Pact will make formal commitments (pledges) to work towards compliance with the upcoming AI Act. These pledges will include specific details about the actions they are currently taking or planning to take to meet the Act's requirements. 

The actions could, for example, include the steps that organisations are taking to: 

  • Assess the risks posed by AI systems that are being developed or used 
  • Comply with the obligations for high-risk systems, including: 
      • setting up a risk management system, 
      • implementing appropriate data governance practices, 
      • assembling required technical documentation, 
      • allowing for automatic record keeping, 
      • guaranteeing transparency for users and human oversight, and 
      • integrating security by design and by default. 
  • Conduct fundamental rights impact assessments 
  • Create codes of conduct 

These pledges will be collected and made public by the European Commission, providing visibility into organisations' commitments, increasing their credibility, and fostering trust in the technologies they are developing or using.

Benefits of joining the AI Pact 

The AI Act will introduce a number of new practices for AI developers and users, a group likely to include most organisations by the time the Act comes into force. These new practices are intended to make AI and its use better and safer. The AI Pact allows organisations to stay ahead of the curve by testing internal measures and communicating their commitment to responsible and trustworthy AI. It also facilitates the required internal transformation by providing a space where organisations can come together, share best practices and guidelines, report to regulators and plan new strategies to comply and thrive.  

Another way to facilitate an internal transformation towards responsible AI use is to implement an all-in-one AI governance tool, such as STRIAD:AI Assurance. This will give you full visibility of your AI systems and their associated risks, along with the tools to future-proof your compliance, including for the EU AI Act.

Trilateral Research is already working with government agencies and regulators to define some of the requirements for AI Act compliance and pass these lessons on to clients. If you want to join the AI Pact, reach out to your team lead or contact us, and we can help you map and implement the requirements to ensure the use of responsible and trustworthy AI systems across your organisation.  
