The UK’s new vision for responsible and trustworthy AI

Reading Time: 5 minutes

Authors: Trilateral Research

Date: 17 November 2023


At Trilateral Research, we turn the promise of ethical AI into reality, tackling complex social problems head-on. But what does it take for AI to be “ethical”? Our latest report, commissioned by Innovate UK and BridgeAI, outlines an answer to this question for the UK.

The ‘Report on the Core Principles and Opportunities for Responsible and Trustworthy AI’ provides an actionable framework that sets out core principles, key innovation priorities, new commercial opportunities, and policy and standards developments relating to responsible and trustworthy AI (RTAI). It’s a strategic vision that goes beyond defining the principles that RTAI must fulfil: it also examines how various actors should contribute to creating a sustainable and well-functioning ecosystem that supports RTAI in the UK and abroad.

Principles of responsible and trustworthy artificial intelligence 

The UK’s vision for ethical AI is grounded in six RTAI principles: 

  • Appropriate transparency and explainability: AI systems should be documented so that stakeholders can understand how they work. 
  • Safety, security and robustness: AI systems should work as intended, being secure from cyber-threats (e.g. data poisoning) and safe for end-users and other affected parties. 
  • Non-maleficence: AI systems should not harm people, society at large or the environment, even if misused. This includes protection for human rights. 
  • Privacy and data protection: AI systems should empower individuals to control their personal data. 
  • Fairness and justice: AI systems should promote equality, equity and non-discrimination. Special attention should be paid to algorithmic bias, intellectual property and the exploitation of data-labellers and other workers. 
  • Accountability: People and organisations must take responsibility for actions and outcomes that are influenced by AI systems.  

This vision for RTAI requires attention across all stages of the AI lifecycle. AI stakeholders must be committed to being responsible and facilitating trustworthiness in the products they develop, procure and use. These stakeholders include executives, developers, end-users, other affected parties (i.e., people who don’t use or make an AI system but are affected by its decisions), as well as social and environmental experts, lawmakers, regulators and thought leaders. The report focuses on recommendations (a) for innovators and funding bodies, (b) for commercial companies and (c) for lawmakers, regulators and standards bodies. 

Accelerating change with innovators and funding bodies 

Our report highlights several innovation opportunities that could catapult the UK to the forefront of RTAI. To unlock the economic and societal potential of RTAI, the UK can drive the implementation and further development of the RTAI principles through research and innovation funding. This investment could help firms and thought leaders to gain critical advantages by crafting ground-breaking applications of RTAI principles.

Despite the wealth of high-level guidance on RTAI, concrete guidance on how to innovate around RTAI principles is still needed: practical steps for developers, scientists, managers and organisations that procure AI systems. For example, there are many technical explainability tools that help developers, but few that help those who are impacted by AI. The report reveals a significant opportunity for the UK to spearhead sociotechnical and environmentally sustainable methods and processes. Other principal innovation opportunities derived from the RTAI principles include:

  • Obtaining, refining or synthesising responsible and trustworthy data; 
  • Fundamentals for AI assurance such as metrics, assessment methodologies, testing infrastructure and monitoring tools; 
  • Sustainable technology for AI that makes responsible use of energy, water and rare materials; 
  • Sociotechnical knowledge-sharing, education for laypeople and career paths for experts on how to think critically about both technical and social/ethical aspects of AI technology; 
  • Examples of successful RTAI, such as the OECD toolkit and the CDEI portfolio of assurance techniques. 

Catalysing growth within commercial companies 

The report reveals a wealth of commercial opportunities for private companies. AI systems will revolutionise how companies generate insights, optimise processes and scale their services. But AI will bring more than efficiency; it will bring growth. There is a positive correlation between RTAI investment and revenue growth, and the UK’s AI spend is predicted to surge from £30 billion in 2025 to over £80 billion by 2040. Beyond the bottom line, AI systems that incorporate RTAI promise to build public trust while contributing to CSR practices and ESG goals and complying with international regulatory regimes.

Beyond developing or using AI, the UK has a clear opportunity to craft a robust market for AI assurance, offering tools and services for assessing, auditing and certifying RTAI systems. The UK is well suited to become a leading exporter of AI assurance services due to its strong higher education system and AI industries. Seizing this opportunity will help achieve long-term economic and social benefits in the UK and net the UK a significant share of the international RTAI sector. Despite current fragmentation, this sector can be bolstered through action by government and industry bodies such as standards organisations.

The report’s key findings for industry include: 

  • Developers should implement RTAI principles in existing and new AI systems to increase revenue and public trust. 
  • Industry actors should exploit commercial opportunities where RTAI is underused. There are significant market opportunities in recruitment, quantum computing and synthetic data. 
  • Industry actors should develop and market tools and services that facilitate RTAI and RTAI assurance. 
  • Regulators should address AI assurance market fragmentation. 

Outlining pathways for lawmakers, regulators and standards bodies 

For innovators and companies to find niches in a healthy, globally competitive RTAI ecosystem, they require support. The report surveys UK regulations and standards to provide policymakers with a single reference source to help them create an RTAI ecosystem in the UK. The UK Government has already staked out a decentralised and pro-innovation approach to AI regulation, supported by its National AI Strategy and AI Roadmap.

This approach includes key investments in education, innovation, regulation and international cooperation. The UK can support the RTAI ecosystem domestically by implementing this Roadmap. The Government can also empower regulators by setting the RTAI principles on a statutory footing and ensuring that regulators are adequately funded (especially cross-sector regulators such as the ICO).

The UK can facilitate the export of UK products and services and develop its global leadership in RTAI by identifying opportunities for alignment between UK policies and international initiatives. The six RTAI principles described above are compatible with the UK’s existing relationships with the USA, the OECD, the UN (including UNESCO) and the EU.

In order to maintain its access to EU markets, the UK will need to continue to monitor changes to the EU AI Act, paying special attention to:

  • liability throughout the AI lifecycle and supply chains; 
  • assessing and mitigating the environmental footprint of AI systems; and 
  • the role of standards in the AI assurance market. 

Standards organisations and industry groups can continue to develop and agree on metrics for compliance, in particular technical and measurement standards for AI assurance techniques and services, such as thresholds for bias audits and impact assessments. As participants in CEN and CENELEC, BSI representatives contribute to developing European standards, which means UK experts will be able to influence how guidelines for international trade with the EU are implemented.

Other key recommendations for UK policymakers and regulators are to: 

  • devise and implement policies for sustainable AI, including considerations regarding the environmental footprint of different AI systems; 
  • ensure that regulators are sufficiently empowered and adequately resourced to implement the proposed AI regulatory framework; 
  • strengthen the proposed AI regulatory framework by creating a responsibility and liability framework for demonstrating compliance with AI regulatory principles, applicable to all AI lifecycle actors. 

This report enables policymakers, funding bodies and industry stakeholders to position the UK as a global leader in the critical and fast-paced area of RTAI. The opportunities to steer these systems to be responsible and trustworthy are already clear. By igniting innovation, seizing commercial opportunities and establishing regulatory consistency, the UK stands ready to leverage the social and economic benefits of responsible and trustworthy AI. 

You can read more about the report here, or visit our website to learn about our work in RTAI.
