Frontier AI: Heading safely into new territory


Author: Tim Jacquemard | Senior Research Analyst, Trilateral Research

Date: 1 November 2023

AI is having a breakthrough moment. The technology is rapidly becoming a transformative force, used across countless industries and affecting millions of people daily. As a key enabler, frontier AI is at the forefront of this transformation. The proliferation of AI applications built on frontier AI models raises questions of ethical and social relevance. How do we ensure that everyone benefits from these AI models? What harms can we expect from AI, and who is most likely to suffer them? Trilateral Research investigates the social, legal, and ethical impacts of AI and advises industry, developers, and policymakers on how to develop and deploy ethical frontier AI. Below, we explain the concept of ‘frontier AI’ and why we stress the importance of global efforts, such as the UK government’s AI Safety Summit on 1-2 November 2023, to develop ethical frontier AI.

Frontier AI 

‘Frontier AI’ is a catch-all term for AI models that match or outperform existing cutting-edge AI models, either in their capabilities or in the variety of tasks they can perform. At the moment, ‘frontier AI’ means foundation models or general-purpose AI (GPAI). In contrast to narrow AI systems, these systems can perform a wide range of tasks, including language and image processing. They often function as a kind of platform on which other developers can build applications. The most advanced of these AI models are classified as frontier AI.

‘Frontier’ does not describe an intrinsic property of an AI model; it describes how the model compares to existing AI. Today’s frontier models may cease to be considered ‘frontier’ once they are outperformed by other AI models. For example, the newest version of OpenAI’s well-known multimodal large language model GPT-4 is faster and more accurate than its predecessor, creating a new frontier for AI.

Due to their unprecedented technical capabilities, the computational resources they require, and the wide variety of tasks they can perform, frontier AI models also engender uncertainty. Part of that uncertainty is their potential to cause harm to people’s lives and well-being, or to the environment:

  • AI tools can have positive and negative impacts. For example, new AI tools can support researchers in scientific discovery. However, new discoveries can also reveal new ways of causing harm: AI models designed for drug discovery can also be deployed to develop new biochemical weapons.
  • Frontier AI may put a powerful tool in the hands of people wishing to harm others. For example, although terrorist organisations have traditionally employed “low-tech” weapons such as firearms and vehicles, they are increasingly using AI tools. Since frontier AI tools require little or no technical expertise to use, they can be used by almost anyone for a variety of purposes, including malicious ones.
  • Applications built on frontier AI may inherit the limitations of the underlying foundation model. For example, large language models are often developed primarily in English and work better in English than in other languages. If many applications are built on top of a flawed foundation model, they may all perpetuate this flaw.

Responsible and trustworthy AI (RTAI) 

To prevent these harms and to reap the benefits of AI systems, we need ethical AI or, in other words, trustworthy and responsible AI. To be trustworthy, an AI system must not violate the rights and interests of its end-users or affected parties, making it worthy of their trust. To be responsible, all stakeholders in the AI system’s lifecycle must be accountable for ensuring its ethical development and use.

The need to develop trustworthy and responsible AI models has been recognised globally. Many governmental and intergovernmental organisations, corporations, NGOs, and academics have formulated frameworks for responsible and trustworthy AI, including the Organisation for Economic Co-operation and Development (OECD), the European Union (EU), and the United Nations Educational, Scientific and Cultural Organization (UNESCO). Governments from around the world have committed to these principles, including the UK, most EU member states, and other countries across Europe, the Americas, Africa, Oceania, and Asia. Global recognition of these principles is essential, as frontier AI will not be limited to a single jurisdiction or region. For these reasons, global approaches, such as the aforementioned AI Safety Summit, are needed.

The Core Principles and Opportunities for Responsible and Trustworthy AI 

Commissioned by Innovate UK and BridgeAI, we have developed a single, common frame of reference on the core principles, key innovation priorities, and new commercial opportunities relating to responsible and trustworthy AI, including frontier AI, in the UK.

 The report identifies six high-level core principles for responsible and trustworthy AI systems:  

  • Appropriate transparency and explainability: relevant information about AI systems must be easily accessible and presented in a way that is comprehensible to stakeholders, including developers, end-users, and affected parties.
  • Safety, security, and robustness: AI systems need to function reliably as intended, both in testing stages and in real-world settings.
  • Non-maleficence: AI systems should do no harm to individuals, communities, society at large, or the environment. Harm includes violations of human dignity and human rights, as well as of mental, physical, and environmental integrity and well-being.
  • Privacy: AI systems should be built to protect the right to limit access to, and augment a person’s control over, personal information. Data protection and cybersecurity regulations and best practices help protect the right to privacy.
  • Fairness and justice: AI systems need to be developed to protect equality and equity and to promote non-discrimination. AI systems must be developed in compliance with existing laws, human rights, and democratic values. People should receive fair compensation for the work required to build, train, and maintain AI systems, including labelling datasets or flagging content.
  • Accountability: Individuals and organisations across the lifecycle of the AI system need to take, or be assigned, responsibility for their actions and provide accessible avenues for contestability and redress when harms or errors resulting from AI systems occur. Companies that develop and deploy AI ought to establish a shared responsibility model with end-users.

In addition to presenting a single, common frame of reference on these core principles, the report identifies key innovation priorities, new commercial opportunities, and policy and standards developments relating to RTAI for the UK. It creates a shared language that makes it easier to communicate commercial innovation opportunities to stakeholders in the industry, and it establishes a framework for RTAI focused on maximising societal benefits and protecting fundamental rights. The report also identifies and evaluates key steps for the UK to lead in RTAI, prioritising opportunities to inform future investments in research, innovation, and policy and standards development so that the economic and societal benefits of RTAI, including frontier AI, can be achieved in the long term. It concludes that the clearest opportunities for innovation, market capture, and policy and standards development lie in AI assurance, sustainable AI, and the sociotechnical development of AI systems. As such, the report can help advance the UK agenda on frontier AI.

You can read more about the report here, and learn more about our work in RTAI here. 
