How to achieve GDPR-compliant anonymisation according to the French CNIL



Trilateral Research

Date: 21 June 2020

Regulation (EU) 2016/679 (GDPR) applies only to personal data, i.e., information that relates to an identified or identifiable individual. Non-personal data, including personal data that has been anonymised, falls outside the scope of the GDPR. Despite this clear delimitation of the GDPR’s field of application, the distinction between personal and anonymous data is becoming increasingly difficult to draw in practice. Indeed, recent advances in technology and data science have shown that anonymised data sets cannot always be protected against re-identification.

In addition, not all European authorities have updated their guidance on anonymisation under the GDPR. Indeed, in the UK, the ICO Code on anonymisation dates back to 2012, whereas the European guidance by the Article 29 Working Party was issued in 2007. According to the 2012 ICO guide, anonymisation is “the process of rendering data into a form which does not identify individuals and where identification is not likely to take place.”

Contrary to the ICO guidance of 2012, in 2019 the Irish Data Protection Commission (DPC) advised that “anonymisation of data means processing it with the aim of irreversibly preventing the identification of the individual to whom it relates.” Similarly, in May 2020, the French Data Protection Authority (CNIL) released guidance to shed light on how identifiability should be understood and assessed under the GDPR. CNIL’s guidance is aligned with that of the DPC and provides practical suggestions for the application of anonymisation methods.

In countries where the authoritative guidance on anonymisation has not been updated under the GDPR, organisations will need to seek clarification outside their jurisdiction to safeguard their GDPR compliance and identify best practices. In this respect, the CNIL’s recent guidance acts as a useful primer on anonymisation thresholds and methods for organisations looking for actionable information.

The concept and criteria of anonymisation

Overall, CNIL adheres to the previous standards and criteria of anonymisation as laid down by the Article 29 Working Party. It defines anonymisation as “processing which consists of using a set of techniques so as to make it impossible, in practice, to identify the person by any means whatsoever and irreversibly.” It adds that anonymisation “aims to eliminate any possibility of re-identification”. Anonymisation is thus presented as an irreversible and absolute state.

CNIL specifies the cumulative criteria against which an assessment as to whether data has been successfully anonymised is to be made. Again, the following criteria suggested by CNIL are drawn from previous European guidance:

  1. Individualisation: it should not be possible to isolate an individual in a data set.
  2. Correlation: it should not be possible to link separate sets of data concerning the same individual.
  3. Inference: it should not be possible to infer, with near certainty, new information about an individual.
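The individualisation criterion in particular lends itself to a simple mechanical check: if any combination of quasi-identifier values appears only once in a data set, that record can be isolated. The following is a minimal sketch of such a check in Python (the field names and sample data are illustrative assumptions, not part of CNIL’s guidance):

```python
from collections import Counter

def min_group_size(records, quasi_identifiers):
    """Smallest number of records sharing one combination of
    quasi-identifier values. A size of 1 means some individual
    can be singled out, i.e., the individualisation criterion fails."""
    counts = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(counts.values())

records = [
    {"age_band": "30-39", "city": "Lyon"},
    {"age_band": "30-39", "city": "Lyon"},
    {"age_band": "40-49", "city": "Nice"},  # unique combination: can be isolated
]
print(min_group_size(records, ["age_band", "city"]))  # 1 -> individualisation risk
```

A result greater than 1 only addresses the first criterion; correlation and inference must be assessed separately.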

If these three criteria are not fully met, data controllers should conduct an in-depth assessment of the risks of identification to demonstrate that “the risk of re-identification with reasonable means is zero”. Controllers should also consider carrying out regular checks to monitor and verify whether the concerned information remains anonymous.

The suggested anonymisation methods

The CNIL advises that anonymisation is not strictly required under the GDPR, but that it enables the further processing of data and their retention beyond the originally defined retention periods. To this end, CNIL categorises data anonymisation methods under two main headings:

  1. Randomisation

Randomisation consists of modifying the attributes in a dataset so that they are less precise, while preserving the overall distribution. This method reduces the risk of inference while maintaining the accuracy of statistical analysis.
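One common randomisation technique is permutation: shuffling the values of an attribute across records, so that the attribute’s overall distribution is unchanged but a value can no longer be linked to a specific individual. A minimal sketch, assuming a simple list-of-dicts data set (the field names are illustrative):

```python
import random

def permute_attribute(records, attribute, seed=None):
    """Randomisation by permutation: shuffle one attribute across all
    records. The attribute's distribution is preserved exactly, but the
    link between the value and any given record is broken."""
    values = [r[attribute] for r in records]
    rng = random.Random(seed)  # seed only for reproducible demos
    rng.shuffle(values)
    return [{**r, attribute: v} for r, v in zip(records, values)]

people = [{"id": i, "salary": s}
          for i, s in enumerate([30_000, 45_000, 60_000, 52_000])]
shuffled = permute_attribute(people, "salary", seed=42)
# Same multiset of salaries, but the per-person link is broken.
print(sorted(r["salary"] for r in shuffled))  # [30000, 45000, 52000, 60000]
```

Note that permutation alone does not guarantee anonymisation: the three criteria above must still be assessed on the resulting data set.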

  2. Generalisation

Generalisation consists of modifying the scale of the attributes of the data sets, or their order of magnitude, in order to ensure that they are common to a set of people. This method can address the risk of individualisation and correlation.
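In practice, generalisation often means replacing exact values with coarser bands shared by many people, e.g., an exact age with an age range, or a full postcode with its first digits. A minimal sketch of both (the band width and postcode format are illustrative assumptions):

```python
def generalise_age(age, width=10):
    """Generalisation: replace an exact age with a band of `width` years,
    so the value is common to a whole group of people."""
    low = (age // width) * width
    return f"{low}-{low + width - 1}"

def generalise_postcode(postcode, keep=2):
    """Generalisation: keep only the first `keep` characters of a
    postcode, masking the rest."""
    return postcode[:keep] + "*" * (len(postcode) - keep)

print(generalise_age(37))            # "30-39"
print(generalise_postcode("69008"))  # "69***"
```

Coarser bands lower the risk of individualisation and correlation, at the cost of analytical precision; choosing the band width is part of the risk assessment.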

To assess the identification risk and select the appropriate anonymisation method, the following elements should be considered:

  • The personal data categories to be anonymised and stored;
  • The identification and removal of direct identifiers, which could enable easy identification of the concerned data subjects;
  • The distinction between crucial information and secondary or unnecessary information and the deletion of the latter;
  • The purpose of data processing of the anonymised data sets.
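The second element, removal of direct identifiers, is usually the first concrete step. A minimal sketch, assuming an illustrative (not exhaustive) list of identifying attributes:

```python
# Illustrative set of direct identifiers; a real assessment must be
# tailored to the data set at hand.
DIRECT_IDENTIFIERS = {"name", "email", "phone"}

def drop_direct_identifiers(record):
    """Remove attributes that identify a person on their own. Indirect
    (quasi-)identifiers still need generalisation or randomisation."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

print(drop_direct_identifiers({"name": "Ada", "age": 36, "city": "Paris"}))
# {'age': 36, 'city': 'Paris'}
```

Removing direct identifiers is pseudonymisation at best; on its own it does not satisfy the three criteria above.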

Take home message

CNIL’s guidance leaves no room for context-specific anonymisation, where data can vary from personal to anonymous depending on the circumstances of data processing and the applied privacy-preserving measures. This guidance also seems to neglect the abilities of interconnected and AI-driven technologies that challenge the concept of irreversible anonymisation.

Nonetheless, this guidance is of particular importance to organisations in and outside France since it clarifies the concept of anonymisation under the GDPR. CNIL’s guidance also builds on the traditional European understanding of anonymisation under the previous data protection framework. In this context, in the absence of national guidance, the guidance of leading authorities, such as CNIL, could be regarded as a best practice and a European model until local guidance is released.

We have previously provided several key multidisciplinary considerations and supporting measures for organisations seeking to minimise the risk of re-identification. Trilateral’s advisors can support you in meeting your compliance needs. For more information, please visit our Data Governance and Cyber-Risk Service page and do not hesitate to contact a member of our team.
