How to provide meaningful information about the logic involved in automated decisions

Authors: Trilateral Research

Date: 15 June 2020

On 20 May 2020, the Information Commissioner’s Office (ICO) and The Alan Turing Institute published their detailed guidance ‘Explaining decisions made with AI’. This guidance assists data controllers in meeting their obligation under Article 22 of Regulation (EU) 2016/679 (GDPR) to provide data subjects with a meaningful explanation of the logic involved in automated decisions. After first introducing the legal framework, this article presents the guidance as an essential and practical tool for organisations implementing AI solutions in decision-making processes.

Automated decision-making in the GDPR

Although the GDPR is technologically neutral and does not directly reference AI or machine-learning technologies, it places a significant focus on the automated processing of personal data for the purpose of making decisions about individuals. According to the WP29 Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, controllers may carry out profiling and ‘general’ automated decision-making. However, this processing may only take place where a robust legal basis can be relied upon, the principles of data protection are adhered to, and all applicable data subjects’ rights can be upheld in accordance with Articles 13-21 GDPR.

As prescribed in Article 22 GDPR, additional safeguards and restrictions apply in the case of ‘solely’ automated decision-making. Solely automated decision-making means that the output of an AI system is directly translated into a decision without any human involvement or oversight. According to the above-mentioned guidelines, human involvement is considered meaningful only when it has an influence on the result and is carried out by someone who has the authority to change the automated decision.

Article 22 GDPR provides that:

  • There is a general prohibition on solely automated individual decision-making, including profiling, that has a legal or similarly significant effect;
  • There are exceptions to this rule: where the decision is necessary for entering into, or the performance of, a contract; where it is authorised by Union or Member State law; or where it is based on the explicit consent of the data subject;
  • Where one of the exceptions applies, there must be measures in place to safeguard the data subject’s rights, freedoms and legitimate interests.

To meet this last condition, controllers have to give individuals the right to obtain meaningful information about the logic involved, as well as the significance and envisaged consequences of the AI-supported decision.

The need for detailed guidance

Given that providing data subjects with meaningful information about the logic of highly complex AI systems is not an easy task, the practical implementation of this requirement has faced multiple challenges and limitations. The co-badged guidance by the ICO and The Alan Turing Institute aims to give data controllers practical advice on explaining AI decision-making to the affected data subjects. Without attributing statutory responsibilities, the guidance provides useful information and ‘good practice’ for the targeted audience of each of its three parts.

Part 1: The basics of explaining AI → for Compliance teams and DPOs

Discouraging a ‘one-size-fits-all’ approach, this part identifies six main types of explanation:

  • Rationale explanation – explaining the reasons that led to the decision, delivered in a comprehensible and non-technical way.
  • Responsibility explanation – explaining the responsible functions in the development, management and implementation of the AI system, indicating the contact point for human review.
  • Data explanation – explaining what data is used and how it is used.
  • Fairness explanation – explaining the steps in the design and development of the AI system aiming to ensure that decisions are fair and unbiased.
  • Safety and performance explanation – explaining the steps in the design and development of the AI system aiming to ensure its accuracy, security, robustness and reliability.
  • Impact explanation – explaining the steps in the design and development of the AI system aiming to monitor the impacts on individuals.

Part 2: Explaining AI in practice → for Technical teams

Detailing how organisations can choose and deploy AI systems, as well as the appropriate explanations of their decisions, this part provides six tasks for organisations to follow:

  1. Select priority explanations by considering the domain, use case and impact on the individual;
  2. Collect and pre-process your data in an explanation-aware manner;
  3. Build your system to ensure you are able to extract relevant information for a range of explanation types;
  4. Translate the rationale of your system’s results into usable and easily understandable reasons (a minimal sketch of this task follows the list);
  5. Prepare implementers to deploy your AI system;
  6. Consider how to build and present your explanation.
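
The guidance leaves the implementation of these tasks to each organisation. As one illustration of task 4, the minimal Python sketch below assumes a scikit-learn logistic regression over hypothetical, named features and shows how per-decision contributions might be translated into plain-language reasons; it is a sketch under those assumptions, not a method prescribed by the guidance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical feature names for a credit-style decision; illustration only.
feature_names = ["income", "existing_debt", "years_at_address"]

# Toy data standing in for a real, documented training set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # approve when income outweighs debt

model = LogisticRegression().fit(X, y)

def explain_decision(x: np.ndarray) -> str:
    """Translate one automated decision into plain-language reasons."""
    # For a linear model, coefficient * feature value approximates each
    # feature's contribution to the log-odds of approval.
    contributions = model.coef_[0] * x
    order = np.argsort(-np.abs(contributions))
    decision = "approved" if model.predict(x.reshape(1, -1))[0] == 1 else "declined"
    reasons = [
        f"{feature_names[i]} "
        f"{'supported' if contributions[i] > 0 else 'counted against'} approval"
        for i in order[:2]
    ]
    return f"The application was {decision} mainly because: " + "; ".join(reasons) + "."

print(explain_decision(X[0]))
```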

Part 3: What explaining AI means for your organisation → for Senior management

The final part of the guidance provides senior management with detailed advice on how to identify the roles and responsibilities, as well as the policies, procedures, and documentation, necessary for developing explanations. This practical advice is particularly valuable for organisations progressing from theoretical considerations to technology development plans that comply with data protection. The practices illustrated in the document can be tailored to specific cases to cover the processing undertaken by the controller and its processors, which is particularly valuable in complex vendor-management scenarios.

The bottom line

If your business or organisation intends to implement AI systems that process personal data for automated decision-making, a Data Protection Impact Assessment (DPIA) should be conducted to assess the risks associated with implementing such technologies. Carrying out a DPIA can also greatly assist in understanding the level of human involvement foreseen in the process. The DPIA process can further be used to explore the envisioned workings and mechanisms of the system, which will inform how the organisation meets the requirement to provide a meaningful explanation of the logic of the proposed AI system.

Additionally, Trilateral has recently launched a new service centred on algorithmic transparency. The aim of the service is to provide technical support to clients to help ensure that bias in their algorithms is limited and that their software-based services do not discriminate against or penalise certain demographics of customers, for example ethnic minorities. The transparency and fairness checks that Trilateral offers include:

  • Model evaluation & assessment: accuracy, precision, recall & coefficient of determination, probability calibration, class imbalance inspection & attainment of representative samples (a minimal evaluation sketch follows this list).
  • Brute force examination of black-box algorithms to improve transparency, auditing and accountability.
  • Identification of acceptable trade-offs between accuracy and interpretability, enabling the simplification of complex black-box algorithms into clear, transparent rule-based systems.
  • Locally faithful explanations of complex decision boundaries using tools such as Local Interpretable Model-agnostic Explanations (LIME), as sketched below.
  • Detailed breakdown of model performance, as opposed to an aggregated view, to trace appropriateness of decisions.
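
As an illustration of the model evaluation checks listed above, the following minimal Python sketch uses scikit-learn on a deliberately imbalanced toy dataset (all data and parameter choices are hypothetical) to compute headline classification metrics, inspect class imbalance, and examine probability calibration:

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Toy, deliberately imbalanced dataset; stands in for real decision data.
X, y = make_classification(n_samples=2000, weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
prob = model.predict_proba(X_test)[:, 1]

# Class imbalance inspection: how well is each class represented?
print("class counts in test set:", np.bincount(y_test))

# Headline performance metrics.
print("accuracy: ", accuracy_score(y_test, pred))
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))

# Probability calibration: predicted probabilities vs. observed frequencies.
frac_positive, mean_predicted = calibration_curve(y_test, prob, n_bins=5)
for m, f in zip(mean_predicted, frac_positive):
    print(f"predicted {m:.2f} -> observed {f:.2f}")
```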
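
For the locally faithful explanations mentioned above, the sketch below uses the open-source lime package; the model, data, and feature names are illustrative assumptions rather than part of Trilateral’s service or the ICO guidance. LIME perturbs a single instance and fits a weighted linear surrogate model that approximates the black-box model only in that local region.

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy data and hypothetical feature names; illustration only.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(5)]
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["declined", "approved"],
    mode="classification",
)

# Explain one individual decision: the surrogate's weights indicate how each
# feature pushed this particular prediction, not the model's global logic.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```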

Should you need further assistance in understanding how to comply with these data protection requirements, our Data Governance and Cyber Risk Team is available to help.
