Who is responsible for artificial intelligence governance?


Author: Dr Rachel Finn | Director, Data Protection & Cyber-risk Services / Head of Irish Operations

Date: 14 June 2024

Artificial intelligence governance is essential for using AI tools safely and responsibly. It is less clear, however, who should be held accountable.

AI governance is a shared responsibility among researchers, regulators, AI developers, and a host of other stakeholders, including the organisations that buy and use AI systems day to day.

This blog focuses on those organisations, exploring who is responsible for the governance of AI and offering advice for making it a shared goal across the business.

 

Who is accountable for AI governance within organisations?  

Any organisation that engages with AI systems is responsible for governing them, and non-compliance carries a real risk of sanction. Under the EU AI Act, for example, non-compliant organisations can face fines of up to €35 million or 7% of global annual turnover, whichever is higher.

However, within any organisation there may be multiple business units, each with its own AI tools and use cases, and multiple AI ‘roles’: some business units may deploy AI, while others develop it.

With this in mind, an effective way to ensure the governance of AI across an organisation is to engage multiple stakeholders and build internal accountability. Here are four tips to help you.  

 

1. Create a responsible AI culture   

Consider building a culture in which employees understand both the importance of AI governance and their role in upholding it. This will help you to create a self-governing team – one that checks your AI systems regularly, challenges questionable AI outputs, and raises issues quickly.

Our Director of Data Protection and Cyber-risk Services, Rachel Finn, recently shared how compliance teams can create a responsible AI culture. You can read her advice in full in our Knowledge Library.  

 

2. Build accountability with AI champions

For every business unit that interacts with AI systems, consider appointing an AI champion. This should be someone the rest of the team can raise queries with, and who is ultimately responsible for checking that each AI system is used in line with the organisation’s policies and governance frameworks.

 

3. Appoint a senior AI Officer  

Senior AI positions are on the rise: the number of companies appointing a ‘head of AI’ has tripled globally in the past five years.

Not every organisation will have the resources to hire a dedicated AI Officer. However, appointing someone to oversee the use of AI across the entire organisation can help to build a more cohesive approach to governance. 

 

4. Build a shared responsibility model 

Organisations can have different AI governance obligations, based on how they engage with AI systems. For instance, under the EU AI Act, organisations’ responsibilities can depend on whether they develop, provide, distribute, or use AI systems. 

Ultimately, each actor is accountable for meeting these obligations, regardless of their role. However, it can be helpful to develop a shared responsibility model – a framework for confirming the responsibilities of each party involved. This might be between AI developers and the organisations that buy their systems, for example.  

With this model in place, you can spread the necessary AI governance work across multiple actors, as sketched below.
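
To make this concrete, here is a minimal sketch of how a shared responsibility model could be recorded in a lightweight, checkable form. The role names loosely echo the EU AI Act’s actor categories, but every obligation listed is a hypothetical placeholder rather than a legal mapping; treat it as a starting template to adapt with your legal team.

```python
# A minimal sketch of a shared responsibility matrix.
# Role names loosely echo the EU AI Act's actor categories; the
# obligations listed are hypothetical placeholders, not legal advice.
RESPONSIBILITY_MODEL: dict[str, list[str]] = {
    "developer": ["model documentation", "pre-release risk testing"],
    "provider": ["conformity assessment", "post-market monitoring"],
    "distributor": ["verifying upstream compliance marks"],
    "deployer": ["human oversight", "logging and usage monitoring"],
}


def obligations_for(role: str) -> list[str]:
    """Return the agreed obligations for a given actor role."""
    if role not in RESPONSIBILITY_MODEL:
        raise ValueError(f"No responsibilities agreed for role: {role!r}")
    return RESPONSIBILITY_MODEL[role]


# Example: an AI champion checking what their business unit has signed up to.
print(obligations_for("deployer"))
```

Even a simple mapping like this makes gaps visible: any obligation that appears under no role is an accountability gap to close.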

 

Who is responsible for creating AI regulation?  

We have discussed the responsibility of businesses to govern their own AI systems, but who are businesses accountable to? Who creates the artificial intelligence governance frameworks that your organisation follows?  

This, too, involves multiple stakeholders. Some of your governance frameworks might be based on your organisation’s internal best practices, but others will be created by governments and economic unions; the EU’s AI Act is one example.

Standards organisations often support these accountability frameworks. For instance, the EU AI Act had input from three European Standards Organisations (ESOs) – CEN, CENELEC, and ETSI – alongside National Standards Bodies (NSBs), private consultants, and more.

International standards also exist, such as ISO/IEC 42001 for AI management systems, developed with insight from sector experts.

This interdisciplinary approach is one we champion at Trilateral Research and apply to our own research and innovation.

 

Simplify AI governance with Trilateral Research 

Governing your AI systems effectively is an ongoing process, particularly with new regulation – like the landmark AI Act – emerging. To help you prepare for this legislation, you might find the following guide helpful: Establishing AI governance under the AI Act.
