EU Parliament passes AI Act

Reading Time: 5 minutes

Authors:  

Dr Rachel Finn | Director, Data Protection & Cyber-risk Services / Head of Irish Operations
Dr David Barnard-Wills | Research Innovation Lead
Sara Domingo | Researcher and Data Protection Advisor

Date: 13 March 2024

Over the last few months, European and multi-national organisations have been waiting for the outcome of the vote on the world’s first comprehensive piece of AI legislation. While an early version of the EU’s AI Act was largely agreed in 2022, recent innovations, including generative AI tools like ChatGPT, called the efficacy of some of the text into question. This included questions around the governance of Large Language Models (LLMs) and concerns around sufficient support for emerging European AI innovations. Now that the Act has passed, organisations can look forward to greater certainty over their obligations, and citizens can look forward to better protections when AI systems are used by public or private sector organisations.

Definitions and scope

The final version of the legislation defines an AI system as one which is “machine based”, has some level of “autonomy”, potential “adaptiveness” and generates predictions, recommendations or decisions that influence a physical or virtual environment (Article 3(1)). This means that recommender systems for television shows, predictive tools for financial services and tools that generate content like text and images will all fall within the scope of the Act.

The Act is extra-territorial in scope, much like the General Data Protection Regulation (GDPR) or the Digital Services Act. It covers any organisation developing, distributing or using AI tools in the European Union, as well as any organisation, wherever located, whose AI tool’s output is used in the European Union. Thus, organisations developing or using AI that have locations within the EU, and organisations developing tools whose output will be used in the EU, must all comply with the provisions of the Act.

A risk-based approach

The EU AI Act takes a risk-based approach to the regulation of AI systems, much like the approach taken in the GDPR. For high-risk systems, it creates separate requirements for those who develop and provide systems and those who deploy them. Limited- or low-risk systems will carry relatively light obligations, including in areas like transparency, where providers may simply have to indicate that content was AI-generated. High-risk systems will have more stringent obligations. This category includes, but is not limited to, systems relevant to healthcare, education, employment, finance, public services and benefits, migration, the administration of justice and critical transport infrastructure, as well as those intended to influence the outcome of elections or voter behaviour (Annex III). Citizens will have the right to submit complaints about AI systems and receive explanations about outcomes that might affect their rights. This is similar to existing requirements on algorithmic transparency under the GDPR, where citizens have a right to information about any automated decisions to which they are subject. Certain users of high-risk systems, for example public entities, may be required to register their use of such systems in an EU-level database.

Fundamental Rights Impact Assessments

Certain users of high-risk AI systems will also be required to conduct a mandatory fundamental rights impact assessment (FRIA) for each system they deploy, before the AI system is put into use. This will include systems used in healthcare, finance, banking, insurance, education, employment, public services and other high-risk areas. While there is, as yet, no established and agreed process for conducting such assessments, many organisations, such as the Fundamental Rights Agency, are exploring templates and guidance for FRIAs to establish and disseminate good practice in this area. Furthermore, it appears that existing assessments, such as Data Protection Impact Assessments (DPIAs), will be allowed to work in conjunction with FRIAs.

Technical documentation

Organisations will have to make appropriate technical documentation available for high-risk AI systems; this documentation must enable conformity assessment and risk assessment under the Regulation. The recent changes also introduce technical documentation requirements for general-purpose AI (GPAI) models, including information on the content used to train them.

Sustainability

Finally, some systems will also be required to report on their environmental impact. Specifically, general-purpose AI models that meet certain criteria will be required to provide information on their energy efficiency.

Next steps for the Act

In the vote on Wednesday 13 March 2024, the European Parliament approved the compromise text in line with its committees’ recommendations. This adoption by the Parliament does not yet make it law, as the text still needs to go to the Council of Ministers. Given the involvement of the Council in the trilogue, and the unanimous Council vote in February, its approval is likely to be something of a formality. Once the Council adopts it, the Act will enter into force twenty days after publication in the Official Journal, with a staggered implementation period over the following two years. Specifically, some provisions, including the Articles pertaining to prohibited practices, will become applicable six months after entry into force, while others, such as the obligations for general-purpose models, notifying authorities and notified bodies, and the establishment of the European AI Board, will apply one year after entry into force.

How to prepare

If you are planning to build, purchase or use AI systems in the future, it is worth assessing your organisation’s readiness for AI Act compliance as soon as possible. Two key facts underpin the urgency of beginning this process:

  • Many popular tools, including those developed by large, international software companies, already contain AI elements. Thus, you may not even realise your organisation is already using AI.
  • AI elements are proliferating in tools that most organisations use for basic functions like training, recruitment, health and wellbeing support and background checks, among others. This means that by the time the Act comes into force, many organisations will be using high-risk AI systems, even if they never set out to do so.

As such, there are several things you can do now to build readiness for these new requirements and ensure that you are well prepared.

  1. Build an inventory of all the tools you are using internally that already include advanced analytics elements falling under the definition of “AI”, including those that might fall into the high-risk category (a minimal illustrative sketch of such an inventory follows this list). Review this AI inventory regularly so that you catch new features in software products already in use.
  2. Revise your procurement guidance and contract management policies to take account of AI features and functionalities, and flag systems that need to be entered into the AI inventory.
  3. Use privacy-by-design and ethics-by-design mechanisms to ensure that any AI tools you are developing and implementing take the requirements of the legislation into consideration.
  4. Invest in training to build legally-required AI literacy and ensure that stakeholders have the foundational knowledge to assess AI systems in the future.
  5. Keep yourself updated on the legislation, as well as emerging regulatory guidance that will follow the passage of the Act.
  6. Consider appointing an AI Officer to manage all of the above.
  7. Consider the use of compliance tools, like STRIAD:AI Assurance, to integrate AI Act compliance into the whole AI lifecycle within your organisation.
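
As a purely illustrative sketch of step 1 (the structure, field names, risk labels and example entries below are our own assumptions, not anything prescribed by the Act), an AI inventory can start as a simple structured record per tool:

    # ai_inventory.py - a minimal, hypothetical AI inventory sketch (Python)
    from dataclasses import dataclass

    # Illustrative working labels, loosely mirroring the Act's risk-based approach.
    RISK_LEVELS = ("minimal", "limited", "high")

    @dataclass
    class AIInventoryEntry:
        """One tool or AI feature in the organisation's inventory (illustrative only)."""
        tool_name: str
        vendor: str
        business_function: str  # e.g. recruitment, training, background checks
        ai_capability: str      # what the AI element actually does
        risk_level: str         # your own working classification, one of RISK_LEVELS
        last_reviewed: str      # ISO date of the most recent review

        def __post_init__(self):
            if self.risk_level not in RISK_LEVELS:
                raise ValueError(f"Unknown risk level: {self.risk_level}")

    # Hypothetical example: a recruitment tool whose CV-screening feature may be
    # high-risk under Annex III (employment). Product and vendor names are fictional.
    inventory = [
        AIInventoryEntry(
            tool_name="HireFlow",
            vendor="ExampleSoft",
            business_function="recruitment",
            ai_capability="ranks incoming CVs against job descriptions",
            risk_level="high",
            last_reviewed="2024-03-13",
        ),
    ]

    # Surface the entries that attract the Act's high-risk obligations.
    for entry in (e for e in inventory if e.risk_level == "high"):
        print(f"{entry.tool_name} ({entry.vendor}): assess deployer obligations")

Even a spreadsheet with the same columns serves the purpose; the point is that every AI element is recorded, classified and periodically re-reviewed.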

While the AI Act introduces many new requirements, the good news is that most of them will be familiar to compliance professionals, and they align with existing obligations under legislation like the GDPR and with information security requirements. Trilateral Research has been investing in AI Act readiness for more than five years, and we already have a team of experts, including legal practitioners, ethicists and data scientists, with strong experience providing Responsible AI services, including algorithmic audits. Furthermore, we offer compliance support tools, like our STRIAD:AI Assurance product, to help organisations manage their compliance programmes. For information on how you can build AI Act readiness within your organisation, stay tuned to this newsletter or contact our team.
