Agreement reached on EU AI Act text: summary of key changes

Reading Time: 5 minutes

Authors:  

Dr Rachel Finn | Director, Data Protection & Cyber-risk Services / Head of Irish Operations
Sara Domingo | Researcher and Data Protection Advisor

Date: 9 January 2024

Over the last few months, the future of the world’s first comprehensive piece of AI legislation has been hanging in the balance. While an early version of the EU’s AI Act was largely agreed, last-minute innovations, including generative AI tools like ChatGPT, called the efficacy of some of the text into question. This included questions around the governance of Large Language Models (LLMs) and concerns about sufficient support for emerging European AI innovators. If these issues could not be addressed, the legislation was in danger of being abandoned. However, in a marathon negotiation session in early December, lawmakers agreed the final text and are preparing to send it for formal voting in the European Parliament and Council. At the time of writing, the final text has not yet been made available; however, some of the key changes have been publicised. This article lists some of these key changes, but it focuses on what the next steps are likely to be and what organisations should be doing now to prepare.


Background

The initial text of the EU AI Act was proposed in April 2021 and publicly released for stakeholder comment. The intention of the legislation is to provide safeguards for members of the public in the context of the proliferation of AI systems, and to support industry and innovators so that Europe can capture the opportunities made possible by AI innovation. Since then, the text of the legislation has gone through various iterations as part of the EU’s trilogue process, with the Council of the European Union, the European Parliament and the European Commission each proposing amendments. The version of the text agreed on 9 December represents the final version, which will need to be formally adopted by the Parliament and the Council, likely in April 2024.


High-risk applications

The EU AI Act takes a risk-based approach to the regulation of AI systems, much like the approach taken in the General Data Protection Regulation (GDPR). For high-risk systems, it creates separate requirements for those who develop and provide systems and those who deploy them. Limited- or low-risk systems will have relatively light obligations, largely in areas like transparency, where providers may simply have to indicate that content was AI-generated. High-risk systems will have more stringent obligations. This category will most likely include, but is not limited to, systems relevant to healthcare, education, employment, finance, public services and benefits, migration, the administration of justice and critical transport infrastructure, as well as systems intended to influence the outcome of elections or voter behaviour.¹

Citizens will have the right to register complaints about AI systems and to receive explanations about outcomes that affect their rights. This is similar to existing requirements on algorithmic transparency under the GDPR, where citizens have a right to obtain information about any automated decisions to which they are subject. Certain users of high-risk systems, for example public entities, may also be required to register their use of those systems in an EU-level register.


Fundamental Rights Impact Assessments

Deployers of high-risk AI systems will also be required to conduct a mandatory fundamental rights impact assessment (FRIA) for each system before it is put into use. This will include systems used to deliver healthcare, finance, banking, insurance, education, employment, public services and other high-risk functions. While there is, as yet, no established and agreed process for conducting such assessments, organisations such as the EU Agency for Fundamental Rights are exploring templates and guidance for FRIAs in order to establish and disseminate good practice in this area.


Technical documentation

Organisations will have to make appropriate technical documentation available for high-risk AI systems, including documentation that enables conformity assessment and risk assessment under the Regulation. The recent changes also introduce technical documentation requirements for foundation models and general-purpose AI (GPAI) systems, including information on the content used to train them.


Sustainability

Finally, some systems will also be required to report on their environmental impact. Specifically, general-purpose AI models that meet certain, yet-to-be-publicised, criteria will be required to provide information on their energy efficiency.


Next steps for the Act

Now that the decisions about the content of the legislation are finalised, this information needs to be transformed into formal legislative text. The text will also need to be translated into the 24 official EU languages to make it widely accessible, which is expected to happen in February. Final passage of the legislation by both the Parliament and the Council could happen as early as April, if there are no further political challenges. Once the Act is passed and published in the Official Journal of the European Union, it will enter into force 20 days later, and most of its provisions will become applicable 24 months after that. However, some provisions, including the Articles pertaining to prohibited practices, will be applicable six months after entry into force, and others, such as those covered under the governance chapter and the provisions on conformity assessment bodies, will apply one year after publication.²


How to prepare

If you are planning to build, purchase or use AI systems in the future, it is worth assessing your organisation’s readiness for AI Act compliance as soon as possible. Two key facts underpin the urgency of beginning this process.

  • Many popular tools, including those developed by large, international software companies, already contain AI elements, so you may not realise that your organisation is already using AI.
  • AI elements are proliferating in tools that most organisations use for basic functions such as training, recruitment, health and wellbeing support and background checks, among others. This means that by the time the Act comes into force, many organisations will be using high-risk AI systems, even if they never planned to.

As such, there are steps you can take now to build readiness for these new requirements and ensure that you are well prepared.

  1. Build an inventory of all of the tools you are using internally that already include advanced analytics elements falling under the definition of “AI”, including those that might fall into the high-risk category. Review this AI Register regularly so that you catch new features in software products already in use (a minimal sketch of such a register appears after this list).
  2. Revise your procurement guidance and contract management policies to take account of AI features and functionalities, and flag systems that need to be entered into the AI Register. 
  3. Use privacy-by-design and ethics-by-design mechanisms to ensure that any AI tools you are developing take the requirements of the legislation into consideration. 
  4. Invest in training on human rights and AI ethics to ensure that stakeholders have the foundational knowledge to conduct Fundamental Rights Impact Assessments in the future. 
  5. Keep yourself updated on the legislation, as well as emerging regulatory guidance that will follow the passage of the Act. 
  6. Consider participating in the AI Pact to test your compliance measures early, prior to the Act coming into force. 
  7. Consider appointing an AI Officer to manage all of the above.  
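To make steps 1 and 2 concrete, here is a minimal sketch, in Python, of how an AI Register entry might be structured. This is an illustration only: the field names, the risk-category labels and the example entry (“CandidateScreen”, “ExampleVendor Ltd”) are hypothetical assumptions, not terms drawn from the Act’s final text.

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum

    class RiskCategory(Enum):
        # Simplified, assumed labels; the Act's final text defines the actual categories.
        PROHIBITED = "prohibited"
        HIGH = "high"
        LIMITED = "limited"
        MINIMAL = "minimal"

    @dataclass
    class AIRegisterEntry:
        system_name: str
        vendor: str
        business_function: str        # e.g. recruitment, training, background checks
        risk_category: RiskCategory
        fria_completed: bool = False  # fundamental rights impact assessment status
        last_reviewed: date = field(default_factory=date.today)

        def needs_attention(self) -> bool:
            # Flag high-risk systems that do not yet have a completed FRIA.
            return self.risk_category is RiskCategory.HIGH and not self.fria_completed

    # Hypothetical example: an HR screening tool discovered during an internal inventory.
    register = [
        AIRegisterEntry(
            system_name="CandidateScreen",  # hypothetical product name
            vendor="ExampleVendor Ltd",
            business_function="recruitment",
            risk_category=RiskCategory.HIGH,
        ),
    ]

    for entry in register:
        if entry.needs_attention():
            print(f"{entry.system_name}: high-risk system without a completed FRIA")

Whatever form the register takes, whether a spreadsheet, a database or a governance platform, the important design choice is that each entry records the system’s risk category and assessment status, and that the register is reviewed regularly as new AI features appear in software already in use.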

While the AI Act introduces many new requirements, the good news is that most of them will be familiar to compliance professionals, and they are aligned with existing legislation and standards such as the GDPR and ISO 27001 on information security. Trilateral Research has been investing in AI Act readiness for more than five years, and we already have a team of experts, including legal practitioners, ethicists and data scientists, with strong experience providing Responsible AI services, including algorithmic audits. For information on how you can build AI Act readiness within your organisation, stay tuned to this newsletter or contact our team.

To prepare your organisation for AI Act compliance, you may also consider an all-in-one AI governance platform such as STRIAD:AI Assurance, which is designed to help organisations scale their AI portfolio with confidence while complying with new and emerging regulations.
