Artificial Intelligence (AI) can provide benefits to individuals and society, for example in helping achieve the UN Sustainable Development Goals or realising the European Green Deal. At the same time, AI raises many ethical and social concerns, like bias and discrimination, violations of fundamental rights, and unfair distribution of socio-economic and political power.
On April 21, 2021, the European Commission (the Commission) set out its proposal for a legal framework for AI. With this proposed Regulation, the Commission intends to set the global standard for trustworthy AI. The purpose of the Regulation is two-fold: to strengthen European uptake, investment, and innovation in AI technologies, and to ensure a “high level of protection of public interests, such as health and safety and the protection of fundamental rights, as recognised and protected by Union law.” The rules apply to public and private actors whose AI is placed on the European market or affects people in the EU, regardless of whether the provider or user is physically located in the EU.
AI is defined in the proposed Regulation as “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with” (Art. 3(1)). This definition was crafted “to be as technology neutral and future proof as possible, taking into account the fast technological and market developments related to AI.”
What are the potential impacts of the proposed Regulation?
The proposed Regulation adopts a four-tiered risk-based approach, under which AI systems are subject to different rules depending on the level of risk they pose to fundamental rights and safety. The first category, AI systems posing an unacceptable risk and therefore a clear threat, is banned outright; this includes social scoring and AI that manipulates human behaviour. The second category comprises high-risk AI systems, classified by the function performed by the AI and its specific purpose. Such systems may only be placed on the Union market or put into service if they comply with certain mandatory requirements.
The proposed Regulation establishes a methodology for assessing risk and identifies specific high-risk domains, including critical infrastructure, law enforcement, and justice systems. High-risk AI will be subject to strict regulatory requirements, including ex-ante conformity assessments, appropriate human oversight, and transparency and traceability obligations. Biometric identification (which includes facial recognition) is specifically addressed; the use of real-time remote biometric identification in publicly accessible spaces by law enforcement, for example, is prohibited except in narrowly defined circumstances. The third category, limited-risk AI systems, is subject only to transparency requirements (e.g., informing users that content such as images has been generated by AI). The final category, minimal-risk AI systems, is not subject to any requirements under the Regulation, although providers may voluntarily apply codes of conduct.
How would this Regulation work in practice?
Implementation of the Regulation would be supervised by designated national authorities and facilitated by a newly created European AI Board (AI Board). The national authorities, designated by each Member State, would be responsible for overseeing application of the rules, carrying out market surveillance, and serving as points of contact. The AI Board, composed of a representative from each national authority, the European Data Protection Supervisor, and the Commission, would issue recommendations and guidance to the Commission, act as a centre of competence, and support standardisation efforts.
What is the significance of the proposed Regulation?
The Regulation is significant because it seeks to make the EU a global leader in AI regulation and to ensure the development of safe, secure, trustworthy, and ethical AI. It would facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation through, for example, common mandatory requirements applicable to the design and development of certain AI systems before they are placed on the market.