Making AI responsible: What do you need to know?

Reading Time: 6 minutes

Author:

Dr Rachel Finn | Director, Data Protection & Cyber-risk Services / Head of Irish Operations, Trilateral Research

Date: 7 September 2023

What exactly is Artificial Intelligence (AI)? AI is the simulation of human intelligence by software and machines – using vast data sets to provide analysis and insights that we could not produce by ourselves. It includes decision-support and automated-decision solutions that leverage existing data sets and apply predictive analytics, natural language processing tools, and supervised and unsupervised machine learning to specific problems. These solutions can range from simple statistical trend-analysis tools to fully automated, interactive systems. At Trilateral Research, we develop ethical and responsible AI – taking AI solutions one step further to ensure trust and transparency are embedded at every step. 

 

What are the benefits of responsible AI? 

Developing and using AI responsibly gives you an enhanced solution. Along with the benefits of AI (improved productivity and resource allocation), you’ll build customer trust, gain better insights, and introduce scalable benefits. 

  • Increased trust: Building transparent AI with clear governance and accountabilities enhances your customer’s trust in both you and the solution you provide. 
  • Better insights: A deep analysis of better data can produce new information that optimises outcomes for your organisation. 
  • Scalable benefits: Ensuring you understand and integrate ethical principles enables you to deploy responsible AI at scale. 

 

Recently, there has been a wave of alarmist news stories about ethics in technology and AI. These range from criticism of “Big Tech” to AI experts calling for better regulation so that standards and legislation can keep up with the pace of innovation. Some have even warned that recent developments are moving us closer to Artificial General Intelligence (AGI), which raises unprecedented questions for humanity. 

Researchers and organisations are committed to building, using, and improving AI tools to capture these benefits, and they are becoming increasingly aware of the need to do so responsibly. Furthermore, consumers are receptive to the potential benefits of AI but are also cautious about its risks.  

 

Strategies for building responsible AI 

To support this drive towards responsibility, there are four essential strategies that we recommend organisations and experts follow to build more ethically conscious AI tools and use them responsibly. These include actions you can take when developing your own AI systems or when evaluating the deployment of third-party tools for your own business cases.  

 

1. Co-design AI systems with subject matter experts, ethicists, and legal experts 

Technology development and deployment always occur in a social context, but this isn’t always sufficiently considered. Social relations influence which technologies are developed, marketed, and used, and the resulting use of technology in turn shapes society and social relations. This intermingling requires you to adopt a “sociotechnical” or SocioTech approach to developing and releasing AI tools. This means integrating a multidisciplinary group of experts, often including data scientists, philosophers, ethics and legal experts, and other subject matter experts, into your product design and development teams. These ethicists, lawyers and other relevant subject matter experts work alongside the technical team. They develop their own user stories, translate these into features and functionalities, and submit them to your Product Management Committee (or other relevant internal function) alongside other technical requirements. These ongoing discussions also facilitate the development of training materials that support users in understanding ethical issues and how they should be considered when the tool is deployed. 

 

2. Understand and mitigate bias and the limits of data 

Bias occurs where the results or outputs of an AI system are disproportionate in relation to an idea, group, or person. Bias can occur at any point in the AI life cycle, from the data upon which the model is trained, to the emergence of proxy variables within the model, to the interpretation and application of the algorithmic results in context. Decisions made with algorithms trained on data that is incomplete, or that represents particular categories better than others, can result in some groups experiencing disproportionate benefits or harmful consequences.  

To identify and mitigate bias in an AI tool, your AI development teams should map potential bias at each step in the AI development lifecycle, from the planning and assessment phase to data collection and processing, model training and validation, and deployment of the tool. In the planning and assessment phase, subject matter experts, sociologists, ethicists, and legal experts, as well as data scientists, should be part of the planning exercises to identify what types of bias are likely to feature in the dataset. This will depend on the context in which the data was collected and on how the tool is going to be used. Explainability tools like histograms and Shapley diagrams can be used to identify which variables are contributing to the model’s output. At this stage, developers often control for variables known to contribute to bias in the specific context (e.g., gender, age) to ensure the model performs similarly across different categories. 

With respect to the algorithmic outputs and their use in context, human assessment and analysis are key. Your end-users need to be supported to understand the potential for inheriting underlying biases from historical data, both for individuals and for populations more broadly. Algorithmic transparency and explainability tools can support end-users in understanding the impact of each variable on the output of the algorithm. These insights explain how the algorithm works with the body of data it was trained upon. Enabling this type of explainability empowers your users to assess whether the model’s output aligns with their professional judgement of the situation.   
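
To make the Shapley-value and “performs similarly across categories” points concrete, here is a minimal, hypothetical sketch in Python. It is not Trilateral’s tooling: the dataset file, the `label` and `gender` column names, and the choice of a logistic regression model are all illustrative assumptions, and the open-source `shap` and `scikit-learn` libraries stand in for whatever explainability stack you use.

```python
# Hypothetical sketch: inspect which variables drive the model's output (Shapley
# values) and compare performance across a sensitive attribute. The CSV file and
# the "label" / "gender" column names are illustrative assumptions.
import pandas as pd
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("training_data.csv")  # placeholder dataset
features = [c for c in df.columns if c not in ("label", "gender")]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["label"], test_size=0.3, random_state=0
)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Which variables contribute most to the model's output?
explainer = shap.LinearExplainer(model, X_train)
shap_values = explainer.shap_values(X_test)   # one row of contributions per prediction
shap.summary_plot(shap_values, X_test)        # per-feature contribution overview

# Does the model perform similarly across categories?
groups = df.loc[X_test.index, "gender"]
for group_name, idx in X_test.groupby(groups).groups.items():
    print(group_name, accuracy_score(y_test.loc[idx], model.predict(X_test.loc[idx])))
```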

 

3. Make systems appropriately transparent and explainable for their context of use 

Transparency and explainability provide relevant information about AI systems and present it accessibly to those who need it, whether they are engineers, end-users, or affected parties. They help to answer the questions:  

How does an algorithm work and how does it produce its output?  

Why, as a user of a tool, am I seeing this particular output and why is it significant to me and my decision making?  

As a starting point, developers need to support transparency about the data collected and processed, the scope of the tool, the processes to build it, and the revision and governance procedures in place. Data collection, data labelling and the processes used in an AI system should be documented clearly and comprehensively to allow for audits, updates, and revisions.  
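
As an illustration of what such documentation can look like in practice, the sketch below writes a simplified, model-card-style record to a JSON file. Every field name and value is a hypothetical placeholder rather than a prescribed standard; the point is that data sources, labelling processes, known gaps, scope, and review details are captured somewhere auditable.

```python
# Hypothetical sketch of a model documentation record. All fields and values are
# illustrative placeholders, not a prescribed standard or a real system.
import json
from datetime import date

model_record = {
    "model_name": "example-risk-screening-v1",  # hypothetical
    "intended_use": "Decision support only; outputs reviewed by a trained analyst",
    "out_of_scope_uses": ["Fully automated decisions about individuals"],
    "training_data": {
        "source": "internal case records, 2018-2022",        # illustrative
        "labelling_process": "dual annotation with adjudication",
        "known_gaps": ["under-representation of some age groups"],
    },
    "evaluation": {"metric": "recall", "disaggregated_by": ["age_band", "gender"]},
    "governance": {"owner": "AI Officer", "last_review": str(date.today())},
}

with open("model_record.json", "w") as f:
    json.dump(model_record, f, indent=2)
```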

In the context of operation, transparency and explainability also support the operational performance of the tool by delivering a high-quality presentation of the insights and, consequently, high-quality end-user interaction with the tool. Explainability can be achieved in different ways. One option is to present your user with a list of all the features used by the algorithm as well as a description of how each feature is calculated. This kind of view also gives users the ability to see how an algorithm performs differently for categories individually or in combination (e.g., older people and women as well as older women). 
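
As a minimal, hypothetical sketch of that kind of disaggregated view (the column names, the toy data, and the choice of recall as the metric are all assumptions), the snippet below reports performance for individual categories and for their combination:

```python
# Hypothetical sketch: report a metric for individual categories and for their
# combination (e.g. "65+", "F", and "65+ & F"). Column names are illustrative.
import pandas as pd
from sklearn.metrics import recall_score

results = pd.DataFrame({
    "age_band": ["65+", "65+", "18-64", "18-64", "65+", "18-64"],
    "gender":   ["F",   "M",   "F",     "M",     "F",   "F"],
    "y_true":   [1, 0, 1, 1, 1, 0],
    "y_pred":   [0, 0, 1, 1, 1, 0],
})  # in practice: the model's predictions on a held-out evaluation set

for keys in (["age_band"], ["gender"], ["age_band", "gender"]):
    per_group = results.groupby(keys)[["y_true", "y_pred"]].apply(
        lambda g: recall_score(g["y_true"], g["y_pred"], zero_division=0)
    )
    print(per_group, "\n")
```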

To support the high-quality interpretation of this information, user manuals and training materials should explain the intended uses and benefits, as well as the clear limitations, of each algorithm. Training materials help users understand what the model does and does not do; understanding these limitations gives users the confidence to scrutinise the algorithmic outputs. 

 

4. Create the right accountability and governance structures 

In this context, “accountability” refers to the expectation that individuals or organisations take ownership of their actions or conduct and explain the reasons for the decisions and actions they have taken. When mistakes or errors are made, it also implies taking action to ensure a better outcome in the future. To achieve these ends, developers should establish mechanisms and procedures to ensure responsibility and accountability for AI systems during development and in use. Putting clear lines of accountability in place helps promote transparency and trust. It does so by building confidence among clients and members of the public that AI products and services are planned and monitored, that someone is responsible for the ethical development of the tools, and that one can request an explanation for particular development choices.  

In the context of Responsible AI, accountability refers to the expectation that organisations or individuals will ensure the proper functioning, throughout their lifecycle, of the AI systems that they design, develop, operate or deploy. However, accountability must integrate a “shared responsibility” mechanism between developers and operators, since neither can be fully responsible for protecting fundamental rights across the AI system lifecycle.  

Many existing corporate and IT governance and data management best practices already support accountability, including clarifying roles and responsibilities for ensuring responsible AI. Organisations can consider creating specific functions or roles with accountability for AI, such as AI Officers, or appointing AI Ethics Boards, either of which would be responsible for guiding the development and use of AI. Regular assessment and oversight are essential to ensure accountability across the system’s deployment lifecycle: algorithmic audits and other internal assessments help ensure that a system is compliant with existing and emerging AI regulations and standards, including frameworks on robustness, fairness, and trustworthy AI. Essential to this process is adopting a concrete framework, aligned with standards and legislation, that can demonstrate compliance and the implementation of good practice; such frameworks are emerging internationally via standards bodies such as CEN, ISO, and IEC. 

Implementing a “shared responsibility model” supports the varied aspects of accountability, including obligations on the system developer and obligations on the operator. Under this model, the developer monitors the performance of the tool, ensures data security for the cloud environment and internal systems, maintains quality, and provides operational support. The operator must ensure the responsible use and management of the tool in a business-as-usual environment, including developing internal policies and procedures for the use of the tool, complying with all applicable laws and regulations, investigating the potential for operational biases, and explaining to affected parties how the tool has informed decision-making.  

 

Responsible AI: Stay up to date 

The above strategies provide your organisation with a framework to meet good practice requirements in responsible AI, enabling you and your end-users to capture the benefits of AI. However, this is a rapidly developing space and you should invest in staying up to date on new developments, requirements and innovations that support fair, responsible, and trustworthy AI systems.   

If you’d like to find out more about responsible AI and how Trilateral Research can support you, get in touch.

 
