ProPublica, a nonprofit media organisation, published a seminal article in 2016 entitled Machine Bias. In this piece, they claimed that the risk assessment software used by the US judiciary to classify defendants by their risk of re-offending was biased against the black community.
The algorithm underpinning the risk assessment, COMPAS, analysed past convicts' historical data in order to assess their likelihood to re-offend.
The scores produced by the system were then used to help judges make more informed decisions when deciding whether to grant defendants their freedom.
However, since machine learning systems are fed with data by human beings, that data may be biased in the first place. As a result, the systems inherit the bias and deliver flawed models.
In the COMPAS case, a machine learning system trained on historical data was ultimately not supporting judges in making more informed decisions but, potentially, misleading them.
Biased Machines
The ProPublica team highlighted a few cases where black and white people charged with minor and major criminal offences were rated, respectively, as high and low risk of re-offending. However, two years later, they found that these predictions were wrong: the black defendants did not re-offend, while the white defendants did. Analysing the distribution of scores, the team concluded that black defendants were more likely to obtain a higher risk score than white defendants (see Figures 1 and 2).


Moreover, they calculated the accuracy of COMPAS (the fraction of individuals correctly labelled) to be 61%. The distribution of errors in the classification algorithm was striking. Black defendants were almost twice as likely as white defendants to be given a high-risk score despite not actually re-offending within the following two years. Conversely, white convicts were much more likely than black convicts to be mislabelled as low risk.
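For illustration, here is a minimal Python sketch of how such figures can be computed. The records below are invented for the example and are not ProPublica's data.

```python
# Toy example: 1 = re-offended / rated high risk, 0 = did not re-offend / rated low risk.
# These records are invented for illustration; they are not ProPublica's data.
records = [
    # (group, actually_reoffended, rated_high_risk)
    ("black", 1, 1), ("black", 0, 1), ("black", 0, 1), ("black", 1, 0), ("black", 0, 0),
    ("white", 1, 0), ("white", 1, 0), ("white", 1, 1), ("white", 0, 0), ("white", 0, 0),
]

# Accuracy: the fraction of individuals whose risk label matched what actually happened.
accuracy = sum(actual == rated for _, actual, rated in records) / len(records)
print(f"accuracy: {accuracy:.2f}")

for group in ("black", "white"):
    rows = [(actual, rated) for g, actual, rated in records if g == group]
    non_reoffenders = [rated for actual, rated in rows if actual == 0]
    reoffenders = [rated for actual, rated in rows if actual == 1]
    # Rated high risk despite not re-offending (false positives).
    fpr = sum(non_reoffenders) / len(non_reoffenders)
    # Rated low risk despite re-offending (false negatives).
    fnr = sum(1 - rated for rated in reoffenders) / len(reoffenders)
    print(f"{group}: high risk but did not re-offend: {fpr:.2f}, "
          f"low risk but re-offended: {fnr:.2f}")
```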
We want Fair Machines
The ProPublica team's conclusions drew the attention of the research community, which began to investigate fairness in machine learning (ML) algorithms, as Figure 3 below shows. Researchers started to develop methods to mitigate this risk of delivering biased models so as to ensure fairer outputs.

But how can we mathematically define fairness? The FAT* (Fairness, Accountability and Transparency) community has yet to find a single answer to this question. Nowadays, there exist 22 definitions of fairness, each oriented towards a different understanding of the concept [2]. Some examples of these definitions, based on the confusion matrix (see Figure 4), are provided below, followed by a short code sketch:

- True Positive/Negative Rate: the fraction of actual positive/negative cases that are correctly classified as positive/negative.
- Positive/Negative Predictive Rate: the fraction of predicted positive/negative cases that actually belong to the positive/negative class.
- False Positive/Negative Rate: the fraction of actual negative/positive cases that are incorrectly classified as positive/negative.
- False Positive/Negative Error Rate Balance: A classifier satisfies this definition if both protected and unprotected groups have equal False Positive/Negative Rates.
- Group Fairness: A classifier satisfies this definition if subjects in both protected and unprotected groups (gender, religion, ethnicity, etc.) have an equal probability of being assigned to the positive/negative predicted class.
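To make these definitions concrete, the Python sketch below computes the rates listed above from the four cells of a confusion matrix. The class and method names are illustrative choices for this post, not part of any standard library.

```python
from dataclasses import dataclass

@dataclass
class ConfusionMatrix:
    tp: int  # actual positive, predicted positive
    fp: int  # actual negative, predicted positive
    fn: int  # actual positive, predicted negative
    tn: int  # actual negative, predicted negative

    def true_positive_rate(self) -> float:
        # Correctly predicted positives out of all actual positives.
        return self.tp / (self.tp + self.fn)

    def false_positive_rate(self) -> float:
        # Incorrectly predicted positives out of all actual negatives.
        return self.fp / (self.fp + self.tn)

    def false_negative_rate(self) -> float:
        # Incorrectly predicted negatives out of all actual positives.
        return self.fn / (self.tp + self.fn)

    def positive_predictive_rate(self) -> float:
        # Actual positives out of all predicted positives.
        return self.tp / (self.tp + self.fp)

    def positive_prediction_rate(self) -> float:
        # Share of all subjects assigned to the positive class (used for group fairness).
        return (self.tp + self.fp) / (self.tp + self.fp + self.fn + self.tn)

# A classifier satisfies, e.g., False Positive Error Rate Balance when false_positive_rate()
# is equal for the protected and unprotected groups, and Group Fairness when
# positive_prediction_rate() is equal across the groups.
cm = ConfusionMatrix(tp=40, fp=25, fn=10, tn=75)
print(cm.true_positive_rate(), cm.false_positive_rate(), cm.positive_predictive_rate())
```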
Coming back to ProPublica’s article, which of these definitions of fairness would be appropriate in this case?
As Deborah Hellman explains in Measuring Algorithmic Fairness, it depends on the question you seek to answer. You might ask what the probability is that a black or white person will re-offend given the score of the algorithm (Positive Predictive Rate), or instead what the probability is that an actual black or white re-offender will receive an accurate, high-risk score (True Positive Rate).
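A small numeric sketch, using made-up counts rather than the real COMPAS figures, shows how those two questions can receive different answers on the same data:

```python
# Hypothetical confusion-matrix counts per group (not ProPublica's actual figures).
# tp = high risk & re-offended, fp = high risk & did not,
# fn = low risk & re-offended, tn = low risk & did not.
counts = {"black": {"tp": 40, "fp": 40, "fn": 10, "tn": 60},
          "white": {"tp": 20, "fp": 20, "fn": 20, "tn": 90}}

for group, c in counts.items():
    # Question 1: given a high-risk score, how likely was re-offending? (Positive Predictive Rate)
    ppv = c["tp"] / (c["tp"] + c["fp"])
    # Question 2: given that the person re-offended, how likely was a high-risk score? (True Positive Rate)
    tpr = c["tp"] / (c["tp"] + c["fn"])
    print(f"{group}: P(re-offend | high risk)={ppv:.2f}, P(high risk | re-offend)={tpr:.2f}")
```

With these illustrative counts, the score looks equally reliable for both groups under the first question, while the second question shows the groups being treated differently; this is precisely why the choice of fairness definition matters.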
In conclusion, this new era of fairness will encourage data scientists to be more aware of how their algorithms perform, since, as has been shown, their consequences can place further impediments in the way of building a more equitable society.
Trilateral offers multidisciplinary services in algorithmic transparency, including fairness and transparency evaluations and assessments of ethical, legal, social and economic impact, from both a technical and a legal/social science perspective. Please read our blog on Combating child exploitation with ethically designed technology for an insight into our approach.
For more information please contact our team.