The ACM Conference Highlights: Fairness, Accountability, and Transparency in Socio-Technical Systems

Reading Time: 3 minutes

Authors: Trilateral Research

Date: 27 February 2020

Fairness, accountability, and transparency in socio-technical systems is a research area that has attracted growing interest.

Socio-technical systems shape our day-to-day experience; therefore, it is essential to address the problems they may cause from different perspectives (e.g., legal, philosophical, and educational, among others). The work and workshops at this year’s ACM Conference on Fairness, Accountability, and Transparency (ACM FAccT, formerly ACM FAT*) in Barcelona demonstrated that there is a strong and diverse community with a shared mission to make fair use of technology and to take responsibility for reporting its misuse.

Trilateral Research is proud to be part of this international community since our cross-disciplinary teams ensure that the work we develop incorporates the principles and best practices of accountable algorithms.

Below, we present the highlights and key concepts from this year’s ACM conference.

Accountability Track

Accountability is defined as the obligation to report, explain, or justify algorithmic decision-making, and it was a key topic of discussion at the conference. Given that many decisions are informed by automated systems with direct consequences for society, such as the digital tools deployed to manage access to social safety-net benefits and services, it is crucial to understand who coded these systems, for what reason, and how.

Justifications such as “The algorithm did it” or “I am just a data scientist and do not care about the consequences of my models” are no longer valid today. Therefore, when we talk about accountability, one of the questions that should be considered is “What to account for when accounting for algorithms”.

In her work, Wieringa reviewed more than 200 articles in this field and analysed the material using Bovens’s framework, which structures the problem of algorithmic accountability around five factors:

  • the actor, i.e., who is responsible for the algorithm
  • the forum, i.e., to whom the account is directed
  • the relationship between the two
  • the content and the criteria of the account
  • the effects emanating from the account

As Wieringa pointed out in her presentation at the conference, accountability carries different risks, such as the problem of many hands, virtue washing, and unfamiliarity with the domain, among others. Although the GDPR was introduced to give EU citizens greater protection with respect to such systems, Sunny Seon Kang explained in “The GDPR Paradox” that the law does not address the transparency of computational systems and sets no rules on how governments should ensure that engineers can demonstrate the “necessity and proportionality” of algorithms, i.e., why the development of the system is needed and how many people it will impact.

Fairness

Fairness was another key concept discussed during the conference. Fairness is defined as the responsibility to ensure that algorithmic decisions do not create discriminatory or unjust impacts across protected attributes such as gender, religion, race, and age, among others.

Today, there exist more than 20 mathematical definitions of fairness, each capturing a different nuance of the concept. These definitions were brought to light in the hope that computer scientists, who now act as de facto social planners as a result of the automation of public policy institutions, will use them.
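To make this concrete, here is a minimal, hypothetical sketch (toy data, not drawn from any conference paper) of two widely used definitions, demographic parity and equal opportunity, computed over binary predictions for two groups:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute difference in true-positive rates between the two groups."""
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return abs(tpr_0 - tpr_1)

# Toy predictions for eight individuals split across two groups
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("demographic parity gap:", demographic_parity_difference(y_pred, group))
print("equal opportunity gap: ", equal_opportunity_difference(y_true, y_pred, group))
```

Satisfying one definition generally does not imply satisfying another, which is why the choice of definition matters.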

In their work Fair Classification and Social Welfare, Hu and Chen pose the question: “how do leading notions of fairness map onto longer-standing notions of social welfare?”

Here, the authors analyse whether fairness-constrained optimisation actually benefits disadvantaged individuals in terms of welfare. Through a sound experimental framework, they conclude that adding a fairness definition as a constraint to an optimisation problem, with precision as the objective function to be maximised, does not necessarily improve outcomes for marginalised groups and can actually worsen welfare for all individuals, advantaged and disadvantaged alike.
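As a rough, self-contained illustration of what “fairness as a constraint” means in practice (synthetic data and a single decision threshold, not the authors’ experimental setup), the sketch below maximises precision with and without a demographic-parity constraint; by construction, the constrained optimum can only match or fall below the unconstrained one on the chosen objective:

```python
import numpy as np

rng = np.random.default_rng(0)

# Purely synthetic scores and labels for two groups with different base rates
n = 2000
group = rng.integers(0, 2, n)
y_true = rng.binomial(1, np.where(group == 0, 0.6, 0.4))
scores = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, n), 0, 1)

def precision(threshold):
    """Precision of the decision rule score >= threshold."""
    pred = scores >= threshold
    return y_true[pred].mean() if pred.any() else 0.0

def parity_gap(threshold):
    """Gap in positive-prediction rates between the two groups."""
    pred = scores >= threshold
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

thresholds = np.linspace(0.1, 0.9, 81)

# Unconstrained: maximise precision only
best_unconstrained = max(thresholds, key=precision)

# Constrained: maximise precision subject to a demographic-parity gap <= 0.05
feasible = [t for t in thresholds if parity_gap(t) <= 0.05]
best_constrained = max(feasible, key=precision) if feasible else None

print("unconstrained precision:", precision(best_unconstrained))
if best_constrained is not None:
    print("constrained precision:  ", precision(best_constrained))
```

Whether the groups are actually better off under the constrained rule is exactly the welfare question Hu and Chen investigate; the sketch only shows the mechanics of the constrained formulation.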

Other highlights

Natural language processing techniques were also examined at the conference. Over the last few years, several methods have been proposed to prevent bias in word embeddings, as Papakyriakopoulos et al. note. The authors developed a method to detect bias in word embeddings and showed that two of the existing bias-mitigation methodologies are limited. Moreover, they claim that biased word embeddings can themselves be used to detect bias in new data.
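One common way to probe such bias (illustrative only, using made-up toy vectors rather than the authors’ method or real pretrained embeddings) is to build a bias direction from paired gendered words and project other words onto it:

```python
import numpy as np

# Toy "embeddings"; a real analysis would load pretrained vectors
# such as word2vec or GloVe instead of these hand-written values.
emb = {
    "he":       np.array([ 0.9, 0.1, 0.0]),
    "she":      np.array([-0.9, 0.1, 0.0]),
    "man":      np.array([ 0.8, 0.2, 0.1]),
    "woman":    np.array([-0.8, 0.2, 0.1]),
    "engineer": np.array([ 0.5, 0.6, 0.2]),
    "nurse":    np.array([-0.5, 0.6, 0.2]),
}

def unit(v):
    return v / np.linalg.norm(v)

# Bias direction: average difference between paired gendered words
pairs = [("he", "she"), ("man", "woman")]
direction = unit(np.mean([emb[a] - emb[b] for a, b in pairs], axis=0))

# The sign and magnitude of the projection indicate towards which pole
# a word leans and how strongly.
for word in ("engineer", "nurse"):
    print(word, float(np.dot(unit(emb[word]), direction)))
```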

Explainability, Auditing, Sensitive Attributes, Education, Data Collection, Values, Ethics and Policy were among other topics discussed which reflected the conference’s multidisciplinary nature.

Our commitment

At Trilateral, we continue to contribute to this research area at varying levels:

  1. Our cross-disciplinary team collaborates across social science and technology, bringing insights from each to capture the benefits of data-driven innovation from different perspectives.
  2. The technical team within Trilateral is devoted to developing fair, accountable, and transparent machine learning models which rely on the principles of this community, usually driven by computational methods published at this conference.
  3. The legal and ethical team within Trilateral ensures that all aspects concerning the privacy and ethics of projects are interrogated.

We offer multidisciplinary services in algorithmic transparency, including fairness and transparency evaluations and ethical, legal, social and economic impact assessments, delivered from a technical, legal and social science perspective.

Our efforts are in areas where the application of our research can make a difference in enhancing societal wellbeing.

For more information, please contact our team.
