Establishing AI Governance under the AI Act

Reading Time: 5 minutes

Author: Dr Zachary J. Goldberg | Ethics Innovation Manager

Date: 22 April 2024

EU AI Act Approved: Obligations for Organisations

On 13 March 2024, the European Parliament approved the Artificial Intelligence Act (AI Act). The Act will enter into force twenty days after its final text is published in the Official Journal of the EU, currently expected in May or June 2024, and will become fully applicable two years after entry into force, with some exceptions (e.g., obligations related to general-purpose AI will apply after 12 months).

The landmark regulation aims to protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in responsible and trustworthy AI. To achieve these ends, the Act imposes obligations on AI systems based on their potential risks and level of impact. However, given the wide variety of AI tools, both in the diversity of algorithmic designs and in their differing contexts of use, identifying potential risks and the level of impact can be onerous for organisations. Understanding whether and when the Act’s obligations apply to the development or deployment of a particular AI tool requires legal, ethical and technical proficiency. This article identifies key steps that organisations can take to develop this proficiency via AI governance: establishing the policies, procedures, and practices that ensure AI systems are developed, deployed, and used in an ethical, responsible, and compliant manner. Doing so can help organisations capture the benefits of AI at scale and provide strong protection for clients.

Background 

AI governance is a collection of frameworks, policies, and best practices that serve as guardrails to ensure that AI technologies are developed and deployed in ways that minimise potential risks and maximise intended benefits. Though the EU and its standards bodies aim to publish more concrete guidance as the regulatory landscape evolves, several useful resources already exist.

The AI Act cites the guidelines for trustworthy AI from the EU’s High-Level Expert Group on AI as a basis for drafting voluntary codes of conduct (recital 14a and article 69), and encourages the development of voluntary best practices and standards for all AI systems. Several existing resources are particularly useful:

  • The guidelines are accompanied by an ‘Assessment List for Trustworthy Artificial Intelligence’ (ALTAI). The ALTAI self-assessment framework is a helpful resource for organisations seeking to understand the risk and level of impact of their AI tools.
  • The NIST AI Risk Management Framework (AI RMF) helps organisations better manage the risks AI poses to individuals, organisations, and society. It is intended for voluntary use and improves organisations’ ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.
  • ISO/IEC 42001 is an international standard that specifies requirements for establishing, implementing, maintaining, and continually improving an Artificial Intelligence Management System (AIMS) within organisations. It is designed for entities providing or utilising AI-based products or services, ensuring the responsible development and use of AI systems.

How to Implement AI Governance 

Although these documents are helpful starting points, translating their content into practical steps can be challenging. The following steps present guidance on implementing AI governance policies and procedures in an organisation.  

Establish Clear Governance Structures: 

  • Designate a cross-functional team or committee responsible for overseeing AI initiatives and ensuring compliance with governance policies. 
  • Define the roles and responsibilities of key stakeholders, including data scientists, product managers, developers, ethicists, legal experts, and business leaders, in the AI governance process. 
  • Ensure that there is clear communication and coordination among different departments and stakeholders involved in AI projects. 

Develop Ethical Guidelines: 

  • Articulate ethical principles and guidelines that reflect the organisation’s values and commitments to fairness, transparency, accountability, and privacy. 
  • Consider risk management and ethical frameworks such as those listed above in the Background section. 
  • Provide concrete examples and scenarios to help employees understand how ethical principles apply to real-world AI applications within the organisation. 

Implement Risk Management Processes:

  • Conduct comprehensive risk assessments to identify potential risks associated with AI systems, including technical risks (e.g., bias, data quality issues), operational risks (e.g., security breaches, compliance failures), and societal risks (e.g., social inequality, job displacement). 
  • Prioritise risks based on their likelihood and potential impact on the organisation and its stakeholders. 
  • Develop risk mitigation strategies and controls to address identified risks, such as data validation processes, bias detection algorithms, and impact assessments (a minimal bias check is sketched after this list).
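
As an illustration of what a basic bias detection check might look like in practice, the following sketch computes the demographic parity difference of a model’s decisions across two groups. The data, group labels, and tolerance are hypothetical assumptions; a real assessment would use fairness metrics and tooling appropriate to the context.

```python
import numpy as np

def demographic_parity_difference(predictions: np.ndarray,
                                  groups: np.ndarray) -> float:
    """Difference in positive-outcome rates between two groups.

    A value near 0 suggests similar treatment across groups; larger
    values flag a disparity worth investigating.
    """
    rate_a = predictions[groups == "a"].mean()
    rate_b = predictions[groups == "b"].mean()
    return abs(rate_a - rate_b)

# Hypothetical model outputs (1 = approved) and protected-group labels.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_difference(preds, groups)
if gap > 0.1:  # illustrative tolerance, not a regulatory threshold
    print(f"Potential disparity detected: gap = {gap:.2f}")
```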

Ensure Compliance with Regulations and Standards: 

  • Stay informed about relevant laws, regulations, and industry standards governing AI technologies in the organisation’s jurisdiction and industry. 
  • Design AI systems to comply with legal and regulatory requirements, such as data protection laws (e.g., GDPR), consumer protection laws, and sector-specific regulations (e.g., healthcare, finance); one way to record an initial AI Act risk-tier triage is sketched after this list.
  • Regularly review and update AI governance practices to adapt to changes in regulations and standards.
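
The AI Act’s obligations scale with risk: prohibited practices, high-risk systems, limited-risk systems subject to transparency obligations, and minimal-risk systems. Below is a minimal sketch of how an internal AI inventory might record this triage. The record fields and example entries are hypothetical, and assigning a tier is a legal judgement that the code merely documents, not automates.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"
    HIGH = "high-risk"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk"

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    tier: RiskTier     # outcome of legal/ethical review, not automated
    reviewed_by: str   # accountable reviewer or committee
    review_date: str

# Hypothetical inventory entries produced by a cross-functional review.
inventory = [
    AISystemRecord("cv-screening", "rank job applicants",
                   RiskTier.HIGH, "legal+ethics committee", "2024-04-15"),
    AISystemRecord("support-chatbot", "answer customer FAQs",
                   RiskTier.LIMITED, "legal+ethics committee", "2024-04-15"),
]

for record in inventory:
    print(f"{record.name}: {record.tier.value}")
```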

Promote Transparency and Accountability: 

  • Foster a culture of transparency by providing clear documentation of AI systems’ design, development, and decision-making processes. 
  • Implement mechanisms for explaining AI-driven decisions to stakeholders, such as explainable AI techniques and user-friendly interfaces (one such technique is shown in the sketch after this list).
  • Establish accountability mechanisms, such as clear lines of responsibility and consequences for non-compliance, to ensure that individuals and teams are held accountable for their actions.
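
As one example of an explainable AI technique, the sketch below uses scikit-learn’s permutation importance to rank which input features most influence a model’s predictions. The dataset and feature names are synthetic placeholders; production systems may call for richer methods such as SHAP values or counterfactual explanations.

```python
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for a production dataset.
X, y = make_classification(n_samples=500, n_features=4,
                           n_informative=2, random_state=0)
feature_names = ["income", "tenure", "age", "region_code"]  # hypothetical

model = LogisticRegression().fit(X, y)

# Permutation importance: how much does shuffling each feature
# degrade accuracy? Larger drops indicate greater influence.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.3f}")
```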

Invest in Data Governance: 

  • Develop data governance policies and procedures to ensure the quality, integrity, and security of data used in AI systems (a minimal quality check appears after this list).
  • Implement data management practices, such as data cataloguing, data lineage tracking, and data access controls, to maintain data quality and privacy. 
  • Establish data governance roles, such as data stewards and data custodians, to oversee data management processes and enforce compliance with data governance policies. 
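
As a small illustration of automated data-quality enforcement, the following sketch validates incoming records against simple rules before admitting them to a training dataset. The field names and rules are hypothetical; production pipelines would typically rely on dedicated validation tooling such as Great Expectations or pandera.

```python
from typing import Any

# Hypothetical quality rules for one record type.
REQUIRED_FIELDS = {"customer_id", "age", "consent_given"}

def validate_record(record: dict[str, Any]) -> list[str]:
    """Return a list of data-quality violations (empty = valid)."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    age = record.get("age")
    if isinstance(age, int) and not 0 <= age <= 120:
        errors.append(f"age out of range: {age}")
    if record.get("consent_given") is not True:
        errors.append("no recorded consent; record must be excluded")
    return errors

batch = [
    {"customer_id": "c-1", "age": 34, "consent_given": True},
    {"customer_id": "c-2", "age": 200, "consent_given": False},
]
clean = [r for r in batch if not validate_record(r)]
print(f"{len(clean)} of {len(batch)} records passed validation")
```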

Foster a Culture of Responsible AI:

  • Provide ongoing training and education to employees about AI ethics, governance, and best practices. 
  • Encourage open dialogue and collaboration among teams working on AI projects to promote ethical decision-making and responsible use of AI technologies. 
  • Recognise and reward employees who demonstrate a commitment to responsible AI practices and ethical conduct.

Regularly Evaluate and Improve Practices: 

  • Establish key performance indicators (KPIs) and metrics to measure the effectiveness of AI governance practices, such as compliance rates, risk mitigation outcomes, and stakeholder satisfaction (two such metrics are computed in the sketch after this list).
  • Conduct regular audits and reviews of AI systems and governance processes to identify areas for improvement and address emerging risks and challenges. 
  • Solicit feedback from stakeholders, including employees, customers, and external experts, to gather insights and perspectives on AI governance practices and identify opportunities for enhancement. 
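
Many governance KPIs can be computed directly from audit records. The sketch below derives two of the metrics mentioned above, a compliance rate and a risk-mitigation closure rate, from hypothetical quarterly review data; the record structure and figures are illustrative assumptions.

```python
# Hypothetical outcomes from a quarter of AI governance reviews.
reviews = [
    {"system": "cv-screening", "compliant": True,
     "risks_found": 3, "risks_mitigated": 3},
    {"system": "support-chatbot", "compliant": True,
     "risks_found": 1, "risks_mitigated": 1},
    {"system": "demand-forecast", "compliant": False,
     "risks_found": 4, "risks_mitigated": 2},
]

compliance_rate = sum(r["compliant"] for r in reviews) / len(reviews)
total_found = sum(r["risks_found"] for r in reviews)
total_mitigated = sum(r["risks_mitigated"] for r in reviews)
mitigation_rate = total_mitigated / total_found if total_found else 1.0

print(f"Compliance rate: {compliance_rate:.0%}")          # 67%
print(f"Risk mitigation closure: {mitigation_rate:.0%}")  # 75%
```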

By following these steps, organisations can establish a robust AI governance framework that promotes the ethical, responsible, and compliant use of AI technologies, positions them as leaders in responsible and trustworthy AI, and helps them capture the benefits of AI at scale. While new guidance and standards emanating from the AI Act have yet to be published, organisations can use these steps to get a head start on their journey to compliance and ethical best practice. Because it takes time to implement these steps fully and turn them into business-as-usual practice, we recommend that organisations begin planning and implementing their AI governance now. For information on how you can build an AI governance framework within your organisation, or to learn how our STRIAD:AI Assurance solution can help you, stay tuned to this newsletter or contact our team.
