UK: +44 (0) 207 0528 285 | IE: +353 (0) 51 833 958

Human intervention and human oversight in the GDPR and AI Act

Reading Time: 5 minutes

Author: Sara Domingo, Researcher and Data Protection Advisor

Date: 31 May 2022

Differences and Practical Challenges

The GDPR introduced the notion of ‘human intervention’ as a way to prevent, in certain circumstances, decision-making based solely on automated means. The forthcoming proposal for a Regulation on Artificial Intelligence (the “AI Act”) uses the term ‘human oversight’ and sets out related obligations. For instance, in December 2021 the European Committee of the Regions proposed introducing a requirement for human intervention in high-risk AI systems intended to be used by public authorities to evaluate the eligibility of natural persons for public benefits and to evaluate creditworthiness.

Although both concepts might seem similar, they are different, and they pose practical challenges that deserve greater attention. What is the difference between ‘human intervention’ and ‘human oversight’? How would these concepts work together? In this article, we will try to shed some light on the definitions and interpretations of both concepts, how they are regulated in the GDPR and the AI Act, and how they would work in practice.

Article 22 of the GDPR

Article 22.1 of the GDPR sets out an explicit prohibition on relying solely on automated processing for decisions which produce legal effects or similarly significantly affect data subjects. The GDPR thereby introduces human intervention as an essential component of the decision-making process.

Three exceptions to this general rule can be found in Article 22.2, whereby the sole use of automated processing would be allowed when:

  • the data subject has given explicit consent

  • it is necessary for entering into, or the performance of a contract

  • it is authorised by Union or Member State law.

Even when the first and second exceptions apply, the GDPR imposes an obligation to implement suitable safeguards, including the right to obtain human intervention a posteriori, once the decision has been made.

Human intervention here can be related to the concept of ‘human in the loop’ explained in the section below, which entails that no decision shall be made solely by a machine (or software): the decision must first be reviewed by a human, who will take other factors into account in making the final decision. The A29WP Guidelines on Automated individual decision-making and Profiling (endorsed by the EDPB) refer to human intervention as ‘additional meaningful intervention’, carried out by humans before any decision is applied to an individual and by someone who has the authority and competence to change the decision.

Human oversight and AI

The EC Ethics Guidelines for Trustworthy AI, published in 2019, already identified ‘human agency and oversight’ as one of the core principles of ethical AI. They introduced the concept of ‘human in the loop’ as the capability for human intervention, and ‘human on the loop’ as the capability to oversee the system’s overall activity.

The Commission White Paper on Artificial Intelligence, published in 2020, lists different mechanisms for achieving effective human oversight and provides some examples, which, according to the Brussels Privacy Hub working paper on Humans in the GDPR and AIA Governance, can be grouped into four governance mechanisms as follows:

Mechanism (White Paper on AI) → Example (White Paper on AI) → Governance mechanism (Brussels Privacy Hub interpretation)

  • The output of the AI system does not become effective unless it has been previously reviewed and validated by a human. Example: the rejection of an application for social security benefits may be taken by a human only. → Human in the loop

  • The output of the AI system becomes immediately effective, but human intervention is ensured afterwards. Example: the rejection of an application for a credit card may be processed by an AI system, but human review must be possible afterwards. → Human out of the loop

  • Monitoring of the AI system while in operation, with the ability to intervene in real time and deactivate it. Example: a stop button or procedure is available in a driverless car when a human determines that car operation is not safe. → Human on the loop + Technical feature

  • Operational constraints imposed on the AI system in the design phase. Example: a driverless car shall stop operating in certain conditions of low visibility, when sensors may become less reliable, or shall maintain a certain distance from the preceding vehicle in any given condition. → Technical feature + Human back in control
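The first two governance mechanisms above can be sketched as simple control flows. The following is a minimal, illustrative Python sketch; the class and function names are hypothetical and not part of any real compliance framework — it only shows the structural difference between holding an output until a human validates it (human in the loop) and letting it take effect with review afterwards (human out of the loop).

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Decision:
    subject: str
    outcome: str          # e.g. "reject" or "approve"
    effective: bool = False

def human_in_the_loop(output: Decision,
                      reviewer: Callable[[Decision], bool]) -> Decision:
    """The AI output becomes effective only after a human validates it."""
    if reviewer(output):
        output.effective = True
    return output

def human_out_of_the_loop(output: Decision,
                          review_queue: List[Decision]) -> Decision:
    """The AI output takes effect immediately; human review happens afterwards."""
    output.effective = True
    review_queue.append(output)   # queued for a posteriori review
    return output

# Usage: a benefits rejection is held until a human signs off, while a
# credit-card rejection takes effect but is queued for later review.
queue: List[Decision] = []
benefits = human_in_the_loop(Decision("applicant-1", "reject"),
                             reviewer=lambda d: False)  # human withholds approval
card = human_out_of_the_loop(Decision("applicant-2", "reject"), queue)
```

The other two mechanisms (real-time monitoring with a stop procedure, and design-time operational constraints) are properties of the running system and its design rather than of a single decision, so they do not reduce to a per-decision check like the ones above.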

When it comes to the AI Act, Article 14 (as it currently stands) simply sets a general obligation for high-risk AI systems (those listed in Annex III) to be designed and deployed in a way that can be effectively overseen by natural persons, and prohibits the use of remote biometric identification without human intervention. In this way, the AI Act fails to identify and regulate mechanisms to effectively implement human oversight. As it stands, users and providers of high-risk AI systems shall choose the appropriate means of human oversight to safeguard fundamental rights and shall be accountable for that choice, but the Act does not specify when and where humans shall have the final word on the decision, or when mere human monitoring of the system (and not of its outputs) will be enough.

Both Regulations in practice

In practice, most of the systems falling under the high-risk category of the AI Act will use personal data and will therefore also be subject to the GDPR. The flexible and lax human oversight requirement in the AI Act will most likely need to be bolstered by the firmer human intervention requirement in the GDPR.

Is it possible that the GDPR will require ‘humans in the loop’ where the AI Act does not? Most likely, yes. And even in the cases where human intervention is bypassed by the GDPR’s exceptions (consent, performance of a contract, or authorisation by Union or Member State law), the GDPR will still safeguard data subjects’ rights by applying the right to human intervention a posteriori and by requiring constraints on the processing of special categories of data.

It is worth noting that developers and users of AI should be wary of introducing some degree of ex-ante human involvement just to circumvent the prohibition enshrined in GDPR Article 22 (which only covers decisions based solely on automated means), since the accountability principle will always stand. Accountability means that controllers are responsible for complying with all the GDPR principles, including lawfulness, fairness, transparency and accuracy; that is to say, controllers are responsible for the outputs of AI systems. If the use of an AI system, with or without human involvement, generates unfair or unlawful outputs for data subjects, the data controller will be held liable. For instance, the Dutch Data Protection Authority fined the Dutch Tax Administration €2.75 million last year for the ‘unlawful and discriminatory’ manner in which the tax authority, using AI, processed the personal data of child care benefit applicants.

Conclusions

The debate around human-in-the-loop decisions is not new, but it is certainly underdiscussed in current reports concerning the finalisation and eventual entry into force of the AI Act. Further reflecting this neglect of an important component of ethical AI, the UK has recently suggested removing the human intervention requirement from the UK GDPR.

Meaningful means of human oversight are difficult to accomplish in practice and pose a real challenge for AI users and developers. More guidance will be needed in the near future; the AI Act should therefore take the opportunity to elaborate on this matter and avoid confusion and legal uncertainty. At the very minimum, the AI Act should provide definitions for the different levels of human oversight and intervention, as was done in the White Paper on AI and the Ethics Guidelines for Trustworthy AI, and should further regulate their application.
