Do We Really Need Another Recommendation for Ethical AI?

Reading Time: 5 minutes


Dr Zachary J. Goldberg
- Ethics Innovation Manager

Date: 24 February 2022

On Why the UNESCO Recommendation Adds Value to a Crowded Field of Guidance Documents.

Most of us don’t go a single day without using an artificial intelligence (AI) technology or an AI-enhanced device. AI appears in our phones, music and TV streaming platforms, smart homes, classrooms, and offices. It powers financial services, creates personalized advertisements, predicts the weather, and is used in road, railway, and aviation transport. It is used for policing and security purposes, and even in outer space. As a technology that impacts almost every aspect of our lives and wellbeing, AI tools require robust ethical assessments to determine their potential for promoting as well as transgressing our ethical values, societal values, fundamental rights, and human rights.

In an attempt to keep pace with the growth of AI technologies in society (including understanding their positive and negative impacts), scholars and policy-makers have published numerous recommendations and guidance documents covering principles, guidelines, codes, or frameworks to bolster ethical guardrails for digital innovation – all with the aim of benefitting humanity and protecting individual rights. Over seventy recommendations on the ethics of AI have been published in just the last three years (Algorithm Watch 2019; Floridi 2019).

Despite this volume of guidance documents, on 24 November 2021 the UNESCO Social and Human Sciences Commission adopted a new Recommendation on the Ethics of Artificial Intelligence. With so many recommendations already available, and with some of them (such as the HLEG AI Guidelines for Trustworthy AI) already exerting significant influence on ethicists and AI developers, it is worth examining what the UNESCO recommendation contributes to the pursuit of ethical AI.

First, it is noteworthy that the UNESCO framework is based on human rights. This approach differs from many other recommendations which are based on values, principles, or fundamental rights. Whereas fundamental rights establish rights belonging to a particular citizenry, human rights purportedly extend to all humans regardless of time, place, national origin, or citizenship. This difference does not necessarily entail a significant disparity concerning which rights are considered, but it does reflect the ambitiously global scope of the UNESCO recommendation.

This makes sense; there is no question that AI ethics is a global phenomenon. AI tools do not cease operation at national borders, as is clear not only by the fact that the internet allows people to interact with AI software independent of their geographic location, but also by the widespread use of AI technologies implemented at international borders to enforce border security. As a consequence, the pressing issues of AI ethics cannot be addressed unilaterally, which makes a global human rights approach quite relevant.

Secondly, the audience of the UNESCO recommendation differs from that of many other recommendations. By adopting a human rights-based approach to AI ethics, UNESCO is communicating directly to states, especially members of the United Nations. This also makes good sense, as states are the entities that protect or violate human rights. Although states are composed of organizations, groups, and individuals, they are still “agents” in their own right, as they possess the minimally necessary capacities required for decision-making and action (see, e.g., French 1979; Isaacs 2011; List & Pettit 2011; Tollefsen 2015).

As you have probably noticed, our ordinary everyday speech reflects this fact. We say that England defeated France at Agincourt in 1415, the United States turned away Jewish refugees fleeing the Nazi regime, or China has violated individuals’ human rights. Furthermore, an act that constitutes a human rights violation when perpetrated by a state is not ordinarily called a human rights violation when perpetrated by an individual. For example, when an individual person imprisons and tortures someone, it seems odd to call this a human rights violation, although it is clearly an atrocious ethical violation and a crime. However, when a state carries out the same act, we call it human rights abuse.

In directing its recommendation towards (all) states, UNESCO’s approach has several ethical benefits. It further reflects the intended global reach of UNESCO’s document and its intention to “provide a basis to make AI systems work for the good of humanity, individuals, societies and the environment and ecosystems, and to prevent harm” (p.5) at a comprehensive, international level. States can support this aim through the adoption of compliance regulations as well as by taking steps to promote and facilitate the fulfillment of the recommendation by corporations, SMEs, universities, and NGOs. By endorsing and even rewarding (e.g., through tax relief) organizations that adopt and fulfill the UNESCO recommendation, states could play a key role in its implementation.

Thirdly, UNESCO emphasizes that special attention must be paid to the needs of Low-to-Middle Income Countries (LMICs), including Least Developed Countries (LDCs), Landlocked Developing Countries (LLDCs), and Small Island Developing States (SIDS). This is essentially a call to pay better attention to the least well-off and most vulnerable states. Such an appeal echoes John Rawls’ second principle of justice – often referred to as “The Difference Principle” – which declares that once certain fundamental rights are secured, some inequalities among individuals or institutions concerning the distribution of goods or opportunities can be justified, so long as these inequalities are to the greatest benefit of the least well off in society.

The aim of “The Difference Principle” is to ensure that morally arbitrary factors (e.g., the geography of one’s birth) do not determine one’s opportunities in life. Applied to the context of the UNESCO recommendation, UN member states should strive to ensure that LMICs derive greater benefits than other states from AI tools and bear less of the burden of any potential negative consequences.

Fourthly, as UNESCO is the author of the recommendation, its attention is understandably focused on promoting collaboration among states through education, the sciences, culture, communication, and information. UNESCO warns that AI tools can pose a significant threat to cultural, social, and biological diversity, thereby furthering social or economic divides. With this emphasis on the intersectionality of society, culture, education, environment (including animal welfare), human interaction, and technology, the UNESCO recommendation does not provide a mere list of principles for readers to follow – such an approach would be far too simple to reflect the complexity of the ethical issues involved with the widespread use of AI.

Instead, UNESCO’s recommendation “approaches AI ethics as a systematic normative reflection, based on a holistic, comprehensive, multicultural and evolving framework of interdependent values, principles, and actions that can guide societies in dealing responsibly with the known and unknown impacts of AI technologies on human beings, societies, and the environment and ecosystems, and offers them a basis to accept or reject AI technologies” (p. 3). This clearly holistic approach – with its emphasis on ethical reflection – is a welcome one. It rightly encourages readers to develop their own ability to reflect upon and assess the ethics of AI. The implication here, which is the same one I make in my TEDx talk, is that the capacity for independent and sincere ethical thinking is at the heart of advancing ethical AI.

UNESCO’s recommendation strides confidently into what has become a crowded field of ethical AI discourse, yet its contributions do more than add yet another voice. In thinking about the ethics of AI in terms of global information, communication, culture, education, research, ethical reflection, and socio-economic and political stability, while also aiming to benefit the least advantaged societies, UNESCO’s recommendation provides needed guidance on why and how states and stakeholders can address the ethics of AI.


Algorithm Watch. 2019. The AI Ethics Guidelines Global Inventory. https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/.

Floridi, L. 2019. Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical. Available at SSRN.

French, P. 1979. The Corporation as a Moral Person. American Philosophical Quarterly 16(3): 207–215.

Isaacs, T. 2011. Moral Responsibility in Collective Contexts. Oxford: OUP.

List, C and P. Pettit. 2011. Group Agency: The Possibility, Design, and Status of Corporate Agents. Oxford: OUP.

Tollefsen, D. 2015. Groups as Agents. Cambridge: Polity.
