Responsible AI needs AI literacy

Reading Time: 4 minutes

Author:

Tim Jacquemard | Senior Research Analyst

Date: 28 May 2024

AI is rapidly transforming industries. Take, for example, the legal field, which is traditionally conservative in its adoption of technology. More than a quarter of legal professionals in the UK use generative AI on a monthly basis, and a survey showed that 62% of UK lawyers have changed their daily operations as a result of these AI tools.

The capabilities of generative AI have grown at an unprecedented pace within a short period of time. As a result, many professionals need to learn quickly how to use these tools responsibly. The EU acknowledges the importance of ensuring that people understand AI tools, and the EU AI Act contains legal obligations to encourage AI literacy.

If you provide AI solutions or deploy AI tools, you may have a legal obligation to promote AI literacy. In this article, we explore AI literacy in the context of the EU AI Act, its importance for the responsible use and development of AI, and the tools available to promote it.

AI literacy and the EU AI Act 

Article 4 of the EU AI Act obliges both providers and deployers of AI systems to take measures to ensure that their staff and the people working on their behalf are sufficiently AI literate. The EU AI Act defines ‘AI literacy’ as the skills, knowledge and understanding of EU AI Act-related rights and obligations, as well as an understanding of AI-related risks and benefits (Recital 20, Article 4). AI literacy enables people to make informed decisions about AI systems, including being aware of their opportunities and risks.

AI literacy is not meant to hinder the further deployment of AI systems. Instead, the focus on AI literacy is motivated by the need to support the transition to a society in which AI plays an important role. According to the EU AI Act, AI literacy helps achieve “the greatest benefits of AI systems” while protecting fundamental EU values such as health, safety and democratic processes (Recital 20). The EU AI Act also stresses the societal importance of AI literacy: while only providers and deployers have obligations regarding AI literacy, the Act notes its importance for all other affected parties who use or encounter AI in daily life (Recital 20).

AI benefits and limitations 

Part of AI literacy is understanding the risks and benefits of current AI technology. These benefits may explain the legal field’s interest in generative AI mentioned above, but the challenges of using generative AI are also becoming apparent.

  • Research showed that when law students used GPT-4 to complete legal tasks, they were nearly a third faster than when they did not use AI. Although the quality of the legal analysis mostly did not improve significantly with AI, the work of lower-skilled students benefitted slightly from the use of AI on one of the legal tasks. Using AI did significantly improve students’ satisfaction in completing the assignment. The researchers behind the study encourage law schools, lawyers, judges, and clients to use AI tools (Choi, Monahan & Schwarcz, 2023).
  • In May 2023, a recently graduated lawyer used ChatGPT to reduce his workload. After the court filing, he realised the AI chatbot had made up court cases. He reported it to the court and apologised, but was fired from his law firm. AI-powered chatbots often generate inaccurate information: research showed that GPT-3.5 provided incorrect legal facts 69% of the time, and Llama 2 performed even worse, being wrong 88% of the time (Dahl, Magesh, Suzgun & Ho, 2024).

Underlying these benefits and risks are some fundamental limitations of AI systems that both providers and deployers need to be aware of.

  • Large language models (LLMs) such as GPT-4 produce coherent and human-like answers. These models are very good at learning relationships between sequences of characters and predicting what a good answer may look like. However, they do not understand the meaning of words or sentences. In other words, LLMs may be excellent at predicting what looks like a fact but not (yet) at fact finding, and as a result they may produce inaccurate answers. The toy sketch after this list makes this concrete.
  • AI models also lack common sense. An AI model does not know what it knows and what it does not know. Similarly, AI models cannot distinguish appropriate from inappropriate outputs. A supermarket in New Zealand used an LLM to help its customers create meals based on the ingredients they have at home. When customers started adding other household items, the meal planner provided recipes for poison. To an LLM, a recipe for poison is no different from a recipe for a meal.
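
To make the first limitation concrete, the sketch below builds a toy next-word predictor in Python. It is a deliberately simplified illustration (real LLMs operate on sub-word tokens with billions of parameters, not word counts over a few sentences), but the principle is the same: the model picks a statistically likely continuation without any notion of whether the result is true.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
# This is an illustrative bigram model, not how GPT-4 works internally,
# but it shows prediction without understanding.
corpus = (
    "the court ruled in favour of the plaintiff . "
    "the court ruled in favour of the defendant . "
    "the court dismissed the case ."
).split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word."""
    return following[word].most_common(1)[0][0]

# Generate text by repeatedly predicting the next word.
words = ["the"]
for _ in range(7):
    words.append(predict_next(words[-1]))

print(" ".join(words))
# Prints: "the court ruled in favour of the court" -- fluent-looking,
# yet the model never checked whether any such ruling exists or even
# whether the sentence makes sense.
```

The output is grammatical and plausible-sounding, which is exactly why hallucinated court cases can slip past a reader who assumes the model is fact-checking.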

Responsible use of AI requires an understanding of these limitations. If the lawyer in the example above had understood the risk of inaccurate answers, he might not have used GPT to identify court cases without reviewing its answers. Understanding these limitations can also help developers build responsible AI: anticipating the potential for misuse, the supermarket chain could have put safeguards in place to block harmful recipes, for example an output filter along the lines of the sketch below.
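
The following sketch shows what such a safeguard could look like in Python. Everything in it is hypothetical: the generate_recipe function stands in for a real LLM call, and the blocklist is illustrative. A production system would layer stronger controls, such as validating inputs against an allow-list of food ingredients and running outputs through a moderation model.

```python
# Hypothetical sketch of an output safeguard for a meal-planning bot.
# None of these names come from the supermarket's actual system.

UNSAFE_TERMS = {"bleach", "ammonia", "chlorine", "detergent"}

def generate_recipe(ingredients: list[str]) -> str:
    # Stand-in for a real LLM call: a model may happily combine
    # whatever it is given, harmful or not.
    return "Aromatic surprise: mix " + ", ".join(ingredients) + " and simmer."

def is_safe(recipe_text: str) -> bool:
    """Reject any recipe that mentions a known unsafe term."""
    text = recipe_text.lower()
    return not any(term in text for term in UNSAFE_TERMS)

def serve_recipe(ingredients: list[str]) -> str:
    recipe = generate_recipe(ingredients)
    if not is_safe(recipe):
        return "Sorry, we cannot suggest a safe recipe for those items."
    return recipe

print(serve_recipe(["rice", "chicken", "lemon"]))    # recipe is served
print(serve_recipe(["water", "bleach", "ammonia"]))  # request is refused
```

A blocklist like this is the weakest form of safeguard, but even it would have stopped the most obvious harmful outputs. The broader point is that the check belongs in the system’s design rather than relying on the user’s vigilance.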

How to improve AI literacy 

AI literacy can be improved in different ways. Depending on organisational needs, providers or deployers may choose one or more of the following tools.

  • Training and education: Professionals can become AI literate through training. AI literacy requires an understanding of the basics of AI systems, including how they work, their limitations and their benefits. It does not require someone to become a specialist in AI technology.
  • AI risk assessments: An AI risk assessment is a systematic approach to identify and mitigate risks throughout the entire lifecycle of an AI system. Such assessments help organisations track and communicate risks to their staff.  
  • Codes of conduct: The EU AI Act encourages organisations to develop voluntary codes of conduct to advance AI literacy among persons dealing with the development, operation and use of AI (Recital 20). Providers and deployers can choose to develop their own codes of conduct (Article 96).
  • Socio-technical solutions: The design of an AI system can be instrumental in promoting AI literacy. At Trilateral, our ethics-by-design approach promotes transparency as a key value of the user experience. Our approach combines the expertise of developers, data scientists, ethicists, legal experts, end-users and stakeholders.

The EU AI Act obliges providers and deployers of AI to ensure sufficient levels of AI literacy, and AI literacy is necessary to maximise the benefits of AI applications and avoid harm. Professionals and organisations are already experiencing both the advantages and the pitfalls of AI. Providers and deployers have several tools available to help ensure AI literacy; these tools help people avoid the potential harms of the technology and reap its benefits.

The Trilateral Ethics Innovation Team has extensive experience and expertise in assisting organisations with the responsible use of AI and compliance with the AI Act. If you need help, please contact us by email at dcs@trilateralresearch.com to discuss your requirements. Our team would be happy to help.
