Compliance teams, here is how to create a responsible AI culture

Reading Time: 3 minutes

Author:

Dr Rachel Finn | Director, Data Protection & Cyber-risk Services / Head of Irish Operations

Date: 15 May 2024

Compliance professionals are facing new pressures. For one, the EU AI Act will soon regulate how organisations build, provide, and use AI tools. In tandem, AI tools are proliferating across departments, from customer service to accounting. 

How can compliance teams effectively govern tools that are not only coming under tighter regulation, but that they are no longer solely responsible for? There are many components to effective AI governance. However, a responsible AI culture – an interdepartmental understanding of how to use AI safely, legally, and with accountability – is a strong starting point.

Here is some advice for building one. 

A responsible AI culture offers commercial opportunity 

Using responsible and trustworthy AI (RTAI) is not just about doing the right thing; it has commercial benefits. As we explore in our Report on the Core Principles and Opportunities for Responsible and Trustworthy AI, remaining compliant is just one of these. Others include generating better insights, building trust among customers, and scaling revenue.

To put it simply, using AI tools responsibly can help business units achieve their individual objectives. When you demonstrate these commercial benefits to each unit, you can start to create a culture of people who are motivated to get the best from their AI tools.  

Three tips for building an RTAI culture  

Define what a responsible AI culture looks like for you 

The first step in creating a responsible AI culture is mapping what it looks like for your organisation. Often, organisations create generic policies to support compliance, but they become too big, too vague, and too unwieldy. 

This can make it tough for employees to understand how to apply such policies to their day-to-day role. If they do not understand their AI responsibilities, how can they stay accountable for their part in the wider culture?

Whatever your organisation, the approach is the same: define what responsible AI looks like in your context, then shape the organisational culture around it:

  1. Think about who your stakeholders are 
  2. Consider the context in which your business is operating 
  3. Assess how your use of AI could impact your stakeholders 
  4. Put measures in place to address those specific risks  

Every organisation has a set of internal values, best practices, and brand promises – so use these as your starting point to identify what your culture needs to reflect. 

Instigate an internal skills uplift 

How AI-literate is your organisation? Do the teams that use AI tools as part of their role understand why they are using them, or the principles that make their use responsible? Do they know which AI governance frameworks to follow, or how to remain compliant?

Investing in an internal skills uplift will soon become part of the compliance team’s role. Under the EU AI Act, organisations that provide or use AI tools will need to ensure a baseline of AI literacy among their staff – so now is the time to invest in company-wide training.

Training could also help you navigate another cultural challenge: an increasingly widespread fear of AI. Research from Accenture has found that almost 60% of employees are concerned about losing their jobs to AI; reskilling can help reshape this view, positioning AI instead as a helpful co-pilot.

Build internal accountability 

AI is adaptive, so checking your tools regularly to make sure they are doing what they were designed to do is critical. If your AI takes a wrong turn, it could produce unintended consequences – not just legally, but socially too.

That is why AI tools require human oversight. Someone needs to be accountable for AI’s output, so that business units can innovate confidently and the weight of continuous checking does not fall solely on the compliance professional’s shoulders.

The number of companies with a head of AI has nearly tripled worldwide over the last five years, and the title “Chief AI Officer” is becoming increasingly mainstream. Nevertheless, you do not need to hire a CAIO to fill this accountability gap: you simply need someone who can make informed decisions about how AI tools are used, and who can give the green light to each new application.

Ensure ongoing AI assurance for the whole company 

One of the curses of the compliance role is that there is always something new to assess. Yet with a culture of people who are motivated to alleviate this pressure, and the tools to support them, this responsibility can be shared. 

One such tool is STRIAD:AI Assurance. Our all-in-one AI governance platform provides a single point of reference across the organisation’s AI solutions and their compliance requirements. It keeps all units accountable for their role in risk reduction, so that compliance can oversee – rather than shoulder – the organisation’s use of AI tools. 

If you’d like to learn more about RTAI for your organisation, you might find value in Making AI responsible: What do you need to know? 

You can also keep up with me on LinkedIn for more insights. 

 
