ChatGPT has propelled artificial intelligence (AI) to the fore of public debate. The popularity of the ground-breaking chatbot has accelerated an arms race in the technology sector to develop new goods and services and to enhance existing software products with AI capabilities. All organisations that use software from third-party vendors embedding this functionality into existing offerings will need to update their compliance frameworks and take steps to comply with existing and emerging regulations. This article addresses the ways in which third-party software vendors are using artificial intelligence to augment existing software offerings for common business functions and suggests practical steps to improve compliance.
Keeping Up with AI
Enterprise application software providers are racing to enhance existing tools with capabilities powered by artificial intelligence. For example, market-leading software brand Intuit Mailchimp recently added features that generate automated content optimised for particular industries and audiences. These features join a suite of AI-powered capabilities that target marketing messages at customers based on their behaviour, activity and traits. Whereas twenty years ago Mailchimp’s offering consisted of a basic newsletter tool, the company now uses machine learning and related technologies to leverage the data in its services.
AI is also transforming human resources workflows. In 2017, 40% of the human resources functions of international companies were using AI-enhanced software services. Because AI can reduce time spent on repetitive tasks, and because it leverages data to make decisions, its value proposition is more efficient and more objective hiring.
However, there are risks. In 2017, for example, Amazon discontinued an automated recruiting experiment that penalised keywords relating to women (for example, “women’s rugby captain”). Because the underlying models were trained to assess patterns in the resumes of previously successful candidates, and because past technical hires were predominantly men, the recruiting tool learned to penalise resumes for technical roles that indicated a female candidate. While one argument in favour of AI is that it removes the human potential for unconscious bias, in this instance bias in the data on which the AI was trained was replicated in the software’s behaviour. Ultimately the tool undermined both business objectives and candidates’ right to non-discrimination.
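The mechanism is easy to reproduce. The following minimal sketch uses invented data and scikit-learn (an assumption for illustration; the internals of Amazon’s tool were never published) to show how a model trained on historically skewed hiring labels learns a negative weight on a feature that acts as a proxy for gender:

```python
# Minimal, hypothetical sketch of how historical bias is learned by a model.
# All data and feature names are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Feature 1: years of relevant experience (genuinely predictive).
experience = rng.normal(5, 2, n)
# Feature 2: resume mentions a women's organisation (a proxy for gender).
mentions_womens_org = rng.integers(0, 2, n)
# Historical labels: past hiring decisions were skewed against the proxy
# feature, independent of ability -- this is the bias in the training data.
hired = (experience + rng.normal(0, 1, n) - 1.5 * mentions_womens_org) > 4.5

model = LogisticRegression().fit(
    np.column_stack([experience, mentions_womens_org]), hired
)
# The learned coefficient on the proxy feature is strongly negative:
# the model has not removed the historical bias, it has encoded it.
print(dict(zip(["experience", "mentions_womens_org"], model.coef_[0])))
```

Running the sketch prints a strongly negative coefficient on the proxy feature, which is exactly the failure mode regular checks for accuracy and bias are meant to catch.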
As software products embed AI capabilities, organisations must take a fresh look at their compliance measures to ensure that they are fit for purpose. This means revisiting existing risk assessments and safeguards to account for any processing and compliance risks unique to, or exacerbated by, AI. All organisations will need to have regard to existing legislation, such as the General Data Protection Regulation (GDPR). They will need to verify that technologies do not have an adverse impact on society, with particular emphasis on marginalised and vulnerable groups. Existing duties on public sector organisations to eliminate discrimination and advance equality of opportunity continue to apply to the impacts of new technologies.
Regulating AI: Where we are, where we’re headed
The General Data Protection Regulation (GDPR)
While the GDPR does not mention artificial intelligence directly, it does regulate automated processing. Article 22 restricts decisions based solely on automated processing, including profiling, that produce legal or similarly significant effects for individuals. Organisations using AI tools in a decision-making capacity should ensure that the processing has an appropriate lawful basis, inform the public of its existence, and carry out regular system checks for accuracy and bias. They should also ensure meaningful human review and enable individuals to challenge automated decisions.
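What meaningful human review looks like in practice will vary, but one common pattern is to record the automated suggestion separately from the final human decision so the two are never conflated. The sketch below is illustrative only; the record structure, field names and example values are assumptions, not a GDPR-mandated design:

```python
# Illustrative sketch of a human-review gate for automated decisions.
# The record structure and names are assumptions, not a prescribed design.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    automated_score: float          # raw output of the AI tool
    automated_outcome: bool         # what the system would decide on its own
    reviewer: Optional[str] = None  # human who confirmed or overturned it
    final_outcome: Optional[bool] = None
    reviewed_at: Optional[str] = None

def apply_human_review(decision: Decision, reviewer: str, approve: bool) -> Decision:
    """Record a human review so no decision rests solely on automated processing."""
    decision.reviewer = reviewer
    decision.final_outcome = approve
    decision.reviewed_at = datetime.now(timezone.utc).isoformat()
    return decision

# The automated suggestion and the human decision are kept as separate fields.
d = Decision(subject_id="applicant-42", automated_score=0.31, automated_outcome=False)
d = apply_human_review(d, reviewer="hr.manager", approve=True)  # reviewer overturns
print(d)
```

Keeping the automated outcome and the final human outcome as separate fields gives the reviewer something concrete to overturn and leaves an audit trail for individuals who challenge a decision.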
The Information Commissioner’s Office (ICO) updated its extensive guidance on AI and data protection on 15 March 2023, including new content on DPIAs, special category data, explainability, transparency, accuracy and fairness.
European Union (EU) AI Act
This first-of-its-kind legislation will, when approved, establish a harmonised approach to artificial intelligence across the EU. Its risk-based approach identifies four levels of risk to health, safety or fundamental rights: unacceptable, high, limited and minimal. While limited-risk AI systems will be subject to a limited set of obligations, high-risk AI systems will be subject to various mandatory requirements. Applications deemed to pose an unacceptable risk, such as emotion recognition in employment or educational settings, will be banned. Organisations that use AI should remain alert to new transparency requirements for AI-generated content, the need to facilitate citizens’ rights to receive explanations of decisions based on high-risk AI systems, and the impact of the EU AI Office. Additionally, users of high-risk AI systems, such as those used in employment, healthcare or other essential services, will have to monitor the operation of their AI systems and suspend their use where serious risks arise.
United Kingdom (UK) Government Initiatives
Targeting government departments and public sector bodies, the Cabinet Office’s Central Digital and Data Office has developed a national standard for algorithmic tools, the Algorithmic Transparency Recording Standard. The standard addresses public bodies deploying products, applications or devices that use algorithms, whether developed in-house or purchased from third parties. It applies to tools that either directly interact with the public, like chatbots, or assist decision-making processes that materially affect individuals or groups or affect eligibility for, or access to, services. The standard promotes two-tiered transparency. The first tier requires organisations to give a general description of the tool’s functions, operation and justification. The second tier is a detailed account addressing accountability for the tool’s development and deployment, including the rationale for its implementation, its role in the decision-making process, risk mitigation measures and impact assessments.
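To make the two tiers concrete, a transparency record might take roughly the following shape. The field names below paraphrase the standard’s themes rather than reproducing its exact schema, and the tool described is invented:

```python
# Illustrative shape of a two-tier transparency record. Field names
# paraphrase the standard's themes; they are not its exact schema.
transparency_record = {
    "tier_1": {  # short, plain-language description for the general public
        "name": "Benefits eligibility triage chatbot",
        "description": "Routes enquiries and suggests likely eligibility.",
        "how_it_is_used": "Assists caseworkers; does not make final decisions.",
        "why_it_is_used": "Reduces response times for routine enquiries.",
    },
    "tier_2": {  # detailed account for specialists and oversight bodies
        "accountability": {"owner": "Digital Services Team", "supplier": "Vendor Ltd"},
        "rationale": "Chosen after a pilot showed faster, more consistent triage.",
        "role_in_decision_making": "Advisory only; a human caseworker decides.",
        "risk_mitigation": ["DPIA completed", "quarterly bias audits"],
    },
}
print(transparency_record["tier_1"]["name"])
```

Tier one stays short and readable for the general public; tier two carries the detail that specialists and oversight bodies need.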
Public sector bodies may also have regard to additional sources of guidance. For example, the Data Ethics Framework outlines a set of principles governing the use of AI, such as transparency, accountability and fairness, while the Technology Code of Practice guides technology design and procurement.
Three steps to improve GDPR compliance
Until the EU AI Act is approved, the GDPR is the main source of obligations on organisations using AI. Data controllers must not only comply with the GDPR, but be able to demonstrate compliance. As third-party software providers begin integrating AI capabilities into previously purchased software, organisations need to revisit their DPIAs and other documentation, accounting for both the benefits and risks of the new capabilities and for emerging compliance requirements.
Step 1: Gather information
Purchasing tools from external suppliers does not nullify the accountability and transparency obligations imposed on data controllers. Organisations should seek to acquire the information necessary to support compliance, including sufficient detail to describe the processing activity, data flows, statistical accuracy and any automated decision-making. The Algorithmic Transparency Recording Standard recommends that organisations seek from suppliers the information needed to give a granular account of a tool’s logic and rules, and of the roles and responsibilities of actors in complex supply chains.
Step 2: Reconsider data protection impact assessments (DPIAs)
Processing that uses innovative technologies and is likely to result in a high risk to individuals triggers a mandatory DPIA. Additionally, DPIAs are living documents: they must be reviewed to reflect changes to processing or risk profiles. Revisiting the DPIA in light of changes to the underlying technology of existing software products is an opportunity to ensure that there is a valid lawful basis and that processing is fair and transparent. Detail the intended outcomes for individuals, society and the organisation, and consider less risky alternatives that achieve the same purpose. Identify and assess risks to individuals, mitigating measures, and how individuals can exercise their information rights, as in the checklist sketched below.
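As a hedged illustration of how such a review might be tracked, the questions below paraphrase the points above, and the helper function is a hypothetical convenience rather than an ICO-prescribed process:

```python
# Hypothetical checklist helper for revisiting a DPIA after an AI upgrade.
# The questions paraphrase the points above and are not exhaustive.
DPIA_REVIEW_QUESTIONS = [
    "Is there still a valid lawful basis for the AI-enhanced processing?",
    "Is the processing fair and transparent to the individuals affected?",
    "Are intended outcomes for individuals, society and the organisation recorded?",
    "Could a less risky alternative achieve the same purpose?",
    "Are risks to individuals, and mitigating measures, identified and assessed?",
    "Can individuals exercise their information rights?",
]

def unresolved(answers: dict) -> list:
    """Return the questions that still need attention before sign-off."""
    return [q for q in DPIA_REVIEW_QUESTIONS if not answers.get(q, False)]

# Only the first question has been answered; the rest remain outstanding.
print(unresolved({DPIA_REVIEW_QUESTIONS[0]: True}))
```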
Step 3: Data processing agreements
The GDPR requires contracts with data processors that impose data protection obligations on them. Even where a service is completely standardised and the contract terms are drafted unilaterally by the processor, a company that elects to use a given provider remains accountable for the processing as data controller. Controllers are under a continuing obligation to use only processors that provide sufficient guarantees that they will implement data protection by design and by default. This means that organisations must satisfy themselves that AI-enhanced processing conforms to the data protection principles, facilitates individuals’ data protection rights, and complies with the GDPR.
Compliance can be complex, not least where the regulatory framework is technical and rapidly changing. Trilateral’s Data Protection and Cyber-risk team has extensive experience in data protection and Responsible AI assessment to ease compliance challenges and future-proof your organisation. Contact dcs@trilateralresearch.com to discuss how we can support data compliance and good governance.