EU AI Act: A guide to the first artificial intelligence act

Reading Time: 10 minutes

Authors:  

Dr Rachel Finn | Director, Data Protection & Cyber-risk Services / Head of Irish Operations
Nicole Santiago | Senior Research Analyst
Dr David Barnard-Wills | Research Innovation Lead
Sara Domingo | Researcher and Data Protection Advisor

Date: 22 March 2024

The EU AI Act has now been passed. Here’s an introductory guide to the Act, including what it is, what it means, and how your business can prepare. 

What is the EU AI Act? 

The European Union’s Artificial Intelligence Act (AI Act) is the first piece of comprehensive AI legislation in the world. Across its 113 articles and 13 annexes, it sets clear rules for the governance and management of AI systems. This includes a strong consideration of risk management and fundamental rights.

The EU AI Act intends to support innovation. By setting rules that create transparency and harmonisation for industry, it aims to support the responsible development, deployment and marketing of AI tools.  

The Act is also extraterritorial. The legal framework applies to organisations that place AI tools on the EU market, or put them into service or use them in the EU, even if they are located in a non-EU country.  

The AI Act is likely to set a benchmark against which other countries will develop their own legislation and set an international standard for the responsible deployment of AI.  

When will the EU AI Act come into force?  

Once the Council of the EU approves the final text, the AI Act will enter into force twenty days after its publication in the Official Journal of the European Union. It will be fully applicable 24 months after it enters into force*.

However, the Act takes a staggered approach to implementation. Some provisions, including the prohibitions (such as social credit scoring and behaviour manipulation), will apply six months after entry into force.

Obligations for general-purpose AI models will apply one year after entry into force, as will obligations around the appointment of competent authorities, the creation of notified bodies and the establishment of the European AI Board.

After 24 months, most remaining obligations under the EU AI Act, including those for high-risk systems, will come into effect.

*NB: If you are developing or deploying an AI system intended to function as a safety component of a product subject to third-party conformity assessment, some obligations will not come into effect until 36 months after entry into force.
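
To see how these milestones stack up in practice, here is a minimal Python sketch that derives each deadline from the entry-into-force date. The date used is a placeholder assumption, since the actual date depends on when the final text is published in the Official Journal.

```python
from datetime import date

from dateutil.relativedelta import relativedelta  # pip install python-dateutil

# Placeholder assumption: the real entry-into-force date will be 20 days
# after the Act's publication in the Official Journal.
ENTRY_INTO_FORCE = date(2024, 8, 1)

# Staggered milestones as described in the Act's implementation timeline.
MILESTONES = [
    ("Prohibitions apply", 6),
    ("General-purpose AI obligations; governance bodies in place", 12),
    ("Most remaining obligations, incl. high-risk systems", 24),
    ("AI safety components in third-party-assessed products", 36),
]

for label, months in MILESTONES:
    deadline = ENTRY_INTO_FORCE + relativedelta(months=months)
    print(f"{deadline.isoformat()}  {label}")
```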

What will the EU AI Act regulate?

The EU AI Act provides a set of requirements for AI systems that are available or used in the EU. They apply even if the AI provider or deployer is outside of the EU, or where the outputs of the system are used in the EU.  

The AI Act applies to machine-based systems that: 

  • Operate with some level of autonomy 
  • May adapt after they are deployed 
  • Generate content, predictions, recommendations or decisions that influence the environment with which they are interacting  

As such, it applies to tools like generative AI, facial recognition and other predictive tools. 

The Act also applies to organisations that develop, provide, distribute or use AI systems, with different obligations depending on the organisation’s role in the AI ecosystem. For example, the obligations of AI developers will differ from those of users.

The EU AI Act takes a risk-based approach to regulation, depending on the application of the system. While systems that are defined as high-risk have a range of specific and significant requirements and obligations, other systems have significantly fewer. Finally, it designates some systems as prohibited, due to unacceptable levels of risk.  

In addition, the Act sets minimum requirements for conformity assessment bodies and regulatory bodies to support a harmonised approach across the EU.  

The Act does not regulate research and development prior to systems being put on the market. It also doesn’t regulate military or national security uses of AI by member states, as these are outside the legal powers of the EU.  

What is the EU AI Act risk framework? 

The Act classifies AI systems by risk level, each with its own requirements and obligations.

Unacceptable risk: 

Systems falling under this classification are deemed to be harmful and in contradiction of EU values and fundamental rights. These include rights to privacy, dignity, democracy and non-discrimination. As such, these systems are prohibited.  

This classification includes systems that engage in: 

  • Social credit scoring 
  • Manipulation of behaviour 
  • Exploitation of vulnerable individuals 
  • Certain types of biometric categorisation 
  • Untargeted, large-scale scraping of facial images to build recognition databases 
  • Some predictive policing applications 
  • Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)

High risk:  

Systems falling under this classification may pose a significant risk to health, safety or fundamental rights. This will include AI systems that are products (or safety components of products) covered by specific named EU harmonisation legislation.

This includes medical devices and various types of vehicles, as well as the following systems mentioned in Annex III: 

  • Biometric identification and classification 
  • Management and operation of critical infrastructure 
  • Education and vocational training and assessment, including admissions  
  • Employment and performance management, including recruitment, candidate assessment and workers’ performance assessment 
  • Access to essential private and public services and benefits, including public benefits, creditworthiness and emergency services dispatch 
  • Law enforcement, including, but not limited to, risk assessment, profiling and crime analytics 
  • Migration, asylum and border control 
  • Administration of justice and democratic processes 

Systems used for these applications are subject to specific requirements around:

  • Risk management 
  • Fundamental rights impact assessments 
  • Data quality and data governance 
  • Technical documentation and record keeping 
  • Transparency for users 
  • Human oversight  
  • Accuracy, robustness and cybersecurity 
  • Demonstrated compliance, including via conformity assessments 
  • Inclusion in an EU register of high-risk systems, if deployed by public authorities other than law enforcement or migration 

There are exemptions for AI systems in high-risk areas that are used for narrow procedural and preparatory tasks, or for detecting decision-making patterns. If your organisation believes that these exemptions apply, the assessment needs to be documented.

A key aspect of the EU AI Act is that these categories and applications are subject to change. Additional systems and applications can be included by the European Commission, subject to regular review and assessment.

Systemic risk in general-purpose AI models and systems: 

General-purpose AI models are deemed to pose systemic risk where they have capabilities matching or exceeding those of the most advanced models currently available, or a wide market reach.

The providers of general-purpose AI models that are likely to pose a systemic risk must comply with the requirements of high-risk systems.  

They must also: 

  • Continuously implement appropriate risk management, quality management, automatic logging, incident reporting and accountability frameworks 
  • Keep and make available adequate technical documentation   
  • Undergo conformity assessment procedures and draw up an EU declaration of conformity 
  • Register the AI model with the EU database of high-risk AI systems  
  • Be able to demonstrate conformity upon a reasonable request from a regulator 
  • Comply with EU accessibility requirements 

Why does AI need to be regulated? 

Many countries around the world are looking to regulate AI. With the AI Act, the EU is trying to strike a balance between encouraging the uptake of AI and protecting individuals and society from potential harm.

The EU wants to capture the potential economic, environmental and social benefits of AI innovation. However, it is cautious about the potential impacts on individual rights and freedoms, health and safety, democracy and the rule of law.  

To strike this balance, it is seeking to create a system that promotes human-centric and trustworthy AI, applied in a consistent and harmonised way across the whole EU. This avoids radical national differences and allows AI technologies to be used across the internal market with minimal additional regulation.

The EU is also seeking to align its rules on AI with existing regulations in areas such as product liability, health and safety, and the protection of personal data.

Who does the EU AI Act apply to? 

The AI Act applies to organisations that develop, provide, distribute or use AI systems in the EU. Even if you are not based in the EU, you will have obligations under the AI Act if you develop software that relies on machine learning or statistical methods, sell such software, act as a distributor for other organisations that develop or sell such software, or use such systems.

However, the legislation recognises that different actors have different roles to play in the AI ecosystem. For instance, users cannot retrain statistical models, and developers cannot guarantee human oversight. As such, the Act includes distinct obligations for each role. When combined, these provide strong assurance of the safety and fairness of AI systems.

For more on the responsibilities surrounding AI software products, check out our guide, AI-Enabled Software Products: First Steps to Compliance.

Who will enforce the AI Act? 

Each member state must designate an organisation – or organisations – responsible for monitoring compliance with the EU AI Act. However, the Act does not necessarily require member states to create a new regulatory authority. 

For high-risk AI systems that are already regulated by existing competent authorities (such as financial services), the enforcement authority will be the entity already charged with regulating that sector under existing legislation.

The Act also creates an EU AI Office within the European Commission to hold expertise and capabilities on AI, and to encourage collaboration between member state regulators. This includes providing coordination support for joint investigations. The Office will have monitoring and enforcement powers equivalent to the national regulators in the specific case of general-purpose AI models.

What happens if you don’t comply with the AI Act? 

Organisations that don’t comply with the EU AI Act will face a range of fines and sanctions, depending on the offence. Member states will be responsible for determining the exact rules on enforcement measures and penalties, using the framework outlined in the AI Act.  

Member states, through their market surveillance authorities, will provide a notice to organisations when they suspect non-compliance with certain articles of the AI Act. If the organisation does not take corrective action, the market surveillance authorities will have the power to take ‘all appropriate provisional measures’, including prohibiting, restricting, withdrawing, or recalling the AI system from the national market.

Member states will also have the power to issue monetary penalties, tiered as follows (a simple illustration of how these caps work appears after the list):

  • Organisations deploying the types of AI that are prohibited by the Act will be subject to fines up to €35 million or 7% of annual worldwide turnover, whichever is higher.  
  • Non-compliance with other aspects of the legislation will make an organisation subject to fines of up to €15 million or 3% of annual worldwide turnover. 
  • Organisations that are regulated under the Act and that provide false or misleading information to supervisory authorities will be subject to administrative fines of up to €7.5 million or 1% of annual worldwide turnover.
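
To make the ‘whichever is higher’ mechanics concrete, here is a minimal Python sketch of how the caps scale with turnover. The tier names are our own illustrative labels rather than terms from the Act, and the actual penalty within each cap will be determined by member state rules.

```python
def fine_cap_eur(tier: str, annual_worldwide_turnover_eur: float) -> float:
    """Illustrative maximum fine: the higher of a fixed amount and a share
    of annual worldwide turnover. (For SMEs the Act applies the lower of
    the two figures instead.)"""
    caps = {
        "prohibited_practice": (35_000_000, 0.07),
        "other_non_compliance": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
    }
    fixed_cap, turnover_share = caps[tier]
    return max(fixed_cap, turnover_share * annual_worldwide_turnover_eur)

# Example: with €1bn turnover, deploying a prohibited system is capped at
# €70m, because 7% of turnover exceeds the €35m floor.
print(fine_cap_eur("prohibited_practice", 1_000_000_000))  # 70000000.0
```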

Fines will take into account the:

  • Nature, gravity and duration of the infringement 
  • Size and market share of the infringing operator 
  • Degree of cooperation with regulators (including if the operator notified regulators itself)  
  • Intentional or negligent character of the infringement 

What’s the expected impact of the EU AI Act? 

As the first example of comprehensive AI regulation, it’s likely that the EU AI Act will become a benchmark for other countries as they consider their own AI legislation.

We would expect countries to take some parts of the Act as inspiration and adjust others to their own situations. As organisations become compliant with the EU Act, they may lobby their lawmakers for similar legislation in other countries to reduce their international compliance costs. This occurred with European data protection legislation, and many countries now have data protection laws that draw inspiration from the GDPR.

In the short term, organisations developing or deploying AI will use the implementation period before the Act comes into force to prepare for compliance. 

How can businesses prepare for the AI Act? 

If you are planning to build, purchase or use AI systems in the future, it is worth assessing your organisation’s readiness for AI Act compliance as soon as possible. There are two key factors that underpin the urgency of beginning this process.  

  • Many popular tools, including those developed by large, international software companies, already contain AI elements. As such, you may not realise that your organisation is already using AI.  
  • AI elements are proliferating in tools used by most organisations for basic functions like training, recruiting, health and wellbeing support, and background checks, among others. This means that by the time the Act comes into force, many organisations will be using high-risk AI systems, even if they have no specific plans to integrate AI.

There are steps you can take to build readiness for these new requirements now, to ensure that you are well prepared. You can find our recommended steps in full here: EU Parliament passes AI Act.  

What does the EU AI Act mean for generative AI? 

The EU AI Act does have some implications for generative AI – the technologies that can easily generate a wide range of content and media such as images, text or video.

The Act treats such technologies as a subcategory of the ‘general-purpose AI model’ – general models that can perform a wide range of distinct tasks when integrated into a variety of systems and applications. The Act requires providers of general-purpose AI models to:

  • Provide downstream users with adequate, up-to-date documentation on their models 
  • Put in place a policy to comply with European Union law on copyright and related rights 
  • Make a publicly available summary of the content used for training the model 

The Act also includes transparency requirements for generative AI. Providers of generative AI are required to implement effective technical measures to label AI-generated content in an identifiable and machine-readable form. This doesn’t apply to assistive functions, such as suggestions on style or tone of writing. However, deployers of AI systems generating deep fakes need to disclose that the content has been artificially manipulated.  
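
The Act does not prescribe a specific labelling format or technology. As a purely illustrative sketch, a provider might attach machine-readable provenance metadata to each piece of generated content, for example as a JSON sidecar file. The schema below is hypothetical; real deployments would more likely adopt an established content-provenance standard.

```python
import json
from datetime import datetime, timezone

def label_generated_content(content_path: str, model_name: str) -> str:
    """Write a JSON sidecar marking a file as AI-generated.

    The field names are illustrative only, not drawn from the Act or
    any existing standard.
    """
    label = {
        "ai_generated": True,
        "generator": model_name,
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    sidecar_path = content_path + ".ai-label.json"
    with open(sidecar_path, "w", encoding="utf-8") as f:
        json.dump(label, f, indent=2)
    return sidecar_path

# Example: mark an image produced by a hypothetical model.
print(label_generated_content("portrait.png", "example-image-model-v1"))
```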

AI systems that are intended to interact directly with people need to be designed so that people know they’re interacting with an AI system, unless it would be obvious for a reasonably well-informed person given the context of use.  All of this information needs to be provided to the people concerned at the time of first interaction or exposure and needs to conform to accessibility requirements.  

For insights on managing the risks of generative AI, you can find our guide for the UK Civil Service here. 

Will the AI Act change?  

Unless the Council of the European Union raises any significant objections, the version of the Act approved by the European Parliament in March 2024 is likely to be the eventual law. However, several aspects of how the Act works in practice will develop over the coming years.

The Act creates a role for the European Commission to elaborate on, or make additions to, several parts of the Act (through so-called ‘delegated acts’) in response to technological developments or the emergence of new risks.

These include: 

  • Modifying the list of areas of high-risk AI 
  • Adding additional high-risk use cases 
  • Updating the rules for classification and designation of general-purpose models with systemic risk 
  • Adjusting the technical documentation requirements or adjusting the conformity assessment procedure 

The Commission will also evaluate the application of the regulation after five years and report on this to the Parliament and Council. 

Regulators empowered by the Act can also be expected to issue guidance that clarifies parts of it and translates its requirements into more practical expectations for how organisations should comply.

The Act relies upon the European standardisation organisations to create harmonised standards reflecting the state of the art in AI; conformity with these standards will create a presumption of conformity with the Act itself. The Commission will issue standardisation requests, oversee that process, and adopt implementing legislation if the standards process isn’t working.

Inevitably, once the Act becomes law, cases are likely to come before the European courts that will provide further clarity on the law in practice. Our team at Trilateral Research will be monitoring each of these developments as they occur.  

Trilateral Research – Your partners for ethical AI 

If you’re looking for support on how to build and maintain the pillars for compliant, ethics-by-design AI, get in touch with our team. We can help you develop and deploy responsible AI at scale with an up-to-date understanding of compliance requirements and an end-to-end AI governance tool, STRIAD:AI Assurance. 
