Generative AI: Capabilities, Risks and Safeguards

Reading Time: 5 minutes

Author: Benjamin Daley | Data Protection Advisor, Trilateral Research

Date: 26 October 2023

Rapid advances in Generative AI (GenAI) – which creates text, images, and other media by drawing on the patterns and structure of input data to generate new data with similar characteristics – have seen its use grow over the past few years. Predictably, we are also witnessing technological development outpace regulation, exposing organisations to risks arising from ungoverned use of GenAI. 

Regulations are being drafted, published, and enacted at pace across the globe. Our recent article featured our interpretation of the UK Government's guidance on GenAI. Furthermore, familiar regimes such as the GDPR and the Charter of Fundamental Rights, along with various other legislative instruments, already contain principles applicable to AI. However, this pace of change presents a key difficulty for those developing and using AI tools: balancing innovation with compliance against various governing principles and emerging regulations. 

A responsible AI approach fosters confidence in safe, thoroughly considered, and effectively governed use of GenAI. This article presents some current use cases for GenAI within organisations, the risks arising from these activities, and our advice on safeguards and mitigations. 

Introducing Generative AI Applications and their Use 

The benefits of GenAI applications – in efficiency, accuracy, and cost – are immediate and apparent, hence their rapid growth in use. In this section, we list the current key applications of GenAI and their use, to help organisations keep track of the projects in which these are most likely to be used. 

 

Text – chatbots interpret a text-based user query and produce a text-based answer, sometimes with access to the internet as well as the trained model itself – a Large Language Model (LLM) – as a knowledge base.
Common use examples:
Creative writing – marketing and research activities are the most common examples of chatbot usage in a professional setting. GenAI can suggest titles or topics for a campaign or paper, or draft entire sections of content in text-based work.
Data analytics – chatbots process Big Data with increasing speed and improving accuracy, independently generating various report types. This is useful for document reviews and for analysing compliance with a given set of rules.

Image – text-to-image programs follow a similar process to chatbots, instead producing an image-based output from a user's prompt. The image can then be developed with further prompts, as revised editions of the original image, refining the output closer to the user's vision.
Common use examples:
Creative works – imaging programs allow organisations to realise their identity as a logo, house style, or series of company materials. This may extend to presentations, with generated works aligning better with an organisation's identity than stock images.
Infographics – converting figures and statistics into infographics is often a manual process. GenAI offers the opportunity to streamline this process, drawing on information sources such as trend analyses.

Audio – audio generator technology builds upon text-to-speech machines, interpreting audio inputs and altering these for a given output.
Common use examples:
Language translation – real-time language translation is made possible through live captioning, with the program interpreting one spoken language and presenting written captions with the equivalent meaning in another language.
Voice changing – voice-changing technology processes audio data, converting it to a different language, to generate an audible output with the same meaning as the spoken words – in real time. This allows conversations to be held simultaneously in multiple languages.

Synthesis – synthesis AI reverses the above processes, utilising Big Data techniques to interpret large datasets and label items accordingly, mirroring their patterns and relationships to create a synthetic dataset.
Common use examples:
AI model training – GenAI provides synthetic datasets for a variety of uses, upholding principles relating to the processing of personal data by training programs and conducting analysis on artificial information with the same patterns and properties as real-world data.
Data analysis – narratives often clarify presented numbers, infographics, and trends. GenAI quickly interprets data to provide meaningful insights from cluttered information.
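As a minimal sketch of the synthesis idea, the snippet below fits simple summary statistics (mean and covariance) to a hypothetical "real" dataset and samples a synthetic dataset that mirrors those patterns. The dataset, the attributes, and the choice of NumPy are illustrative assumptions, not a prescribed method – real synthesis tools use far richer models.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical "real" dataset: 200 records with two correlated
# numeric attributes (say, age and salary) -- illustrative only.
age = rng.normal(40, 10, 200)
salary = 1_000 * age + rng.normal(0, 5_000, 200)
real = np.column_stack([age, salary])

# Learn the statistical shape of the real data...
mean = real.mean(axis=0)
cov = np.cov(real, rowvar=False)

# ...and sample a synthetic dataset that mirrors those patterns
# without reproducing any individual record.
synthetic = rng.multivariate_normal(mean, cov, size=200)

# Correlation structure is preserved, so analysis and model
# training can proceed on the artificial data.
print(np.corrcoef(real, rowvar=False)[0, 1])
print(np.corrcoef(synthetic, rowvar=False)[0, 1])
```

Both printed correlations come out similarly strong, which is the property that makes the synthetic dataset a usable stand-in for analysis or model training.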

 

Potential Risks Stemming from Ungoverned Use of GenAI 

The widespread availability of GenAI has the potential to bring many social and economic benefits, including efficiency, creativity, and accessibility. However, its use may also present significant legal, political, and technological considerations for organisations, in the following areas: 

1. Data protection 

Public GenAI tools make input data accessible to the tool provider and potentially other users, who may receive this information in response to their own queries. This may not be clear to all users, some of whom may be using GenAI for the first time. Should the input include personal data, this is likely to constitute a personal data breach. Guidance for users on how to use GenAI safely and securely is a useful starting point, as is a policy around the use of GenAI and other AI applications.   
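One practical safeguard such guidance might describe is screening prompts for obvious personal data before they leave the organisation. The sketch below is purely illustrative – the patterns are minimal assumptions, and a production system would rely on dedicated PII-detection tooling rather than hand-rolled regexes.

```python
import re

# Minimal, illustrative patterns -- real deployments would use
# dedicated PII-detection tooling, not hand-rolled regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{8,}\d"),
}

def redact(prompt: str) -> str:
    """Replace obvious personal data before a prompt is sent to a public tool."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact Jane at jane.doe@example.com or +44 7700 900123."))
# prints: Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```

Even a simple filter like this makes the safe-use guidance concrete for staff, while the accompanying policy defines what counts as personal data in the first place.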

2. Ethics 

Examples of bias in AI tools are wide-reaching and damaging to those affected. They are often the result of a training sample that is biased relative to the demography of the target population. Considering bias should be an ongoing task, throughout all stages of GenAI development and use. 

Confirmation bias in any human-in-the-loop must be proactively managed, ensuring users review outcomes thoroughly and implement effective peer review processes, such as a double-blind approach, as a crucial safeguard when using GenAI outputs. Raising awareness and providing staff guidance on GenAI and ethical issues such as bias are also imperative mitigations for this type of risk. 

3. Legal 

GenAI outputs may incorporate or derive from existing protected literary, dramatic, musical, and artistic (LDMA) works, potentially exposing organisations to accusations of plagiarism or to legal proceedings for copyright infringement. Organisations should therefore ensure that GenAI-assisted LDMA works used for official purposes are authentic originals, issue guidance to staff, and consider a public disclosure where GenAI has been used.

4. Trade Secrets 

Public GenAI tools make data inputs visible to the provider and potentially other service users. Commercially sensitive data should therefore not be used, as this exposes organisations to commercial data incidents. Guidance to staff on acceptable use of GenAI should be implemented, along with governance favouring licensed applications over public tools. 

5. Reputation 

GenAI tools have the potential to present misinformation articulately, making it difficult to distinguish errors and sinister disinformation campaigns from the truth. Using such information therefore exposes organisations to the reputational risks of lost credibility and trust. Procedures such as source validation, along with vigilant quality assurance, should be put in place to ensure that organisations publish accurate and appropriate information.  

 

Risk Mitigation and Safeguards 

Implementing safeguards supports the regulatory requirement for data protection by design and by default, and promotes responsible use of GenAI. Below is a non-exhaustive list of recommendations for organisations that are starting out on their GenAI journey. 

 

1. AI discovery and risk identification 

Identify and map AI tools currently in use within your organisation and assess the risks to personal data protection and data ethics arising from their use. For each tool, the assessment should map potential legal, data protection, and ethical risks when using it in specific business practices. Use these risks to create a set of internal requirements around responsible use of AI. 
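The discovery exercise above can start as a simple structured inventory. The record shape, tool names, and risk categories below are illustrative assumptions, not a prescribed taxonomy – the point is that a machine-readable inventory lets you derive follow-up actions, such as flagging DPIA candidates.

```python
from dataclasses import dataclass, field

# Hypothetical record structure for an AI discovery exercise.
@dataclass
class AIToolRecord:
    name: str
    business_use: str
    processes_personal_data: bool
    risks: list[str] = field(default_factory=list)

inventory = [
    AIToolRecord("Public chatbot", "Marketing copy drafts", False,
                 ["copyright", "misinformation"]),
    AIToolRecord("Text-to-image tool", "Presentation graphics", False,
                 ["copyright"]),
    AIToolRecord("Transcription service", "Meeting minutes", True,
                 ["data protection", "trade secrets"]),
]

# Tools that process personal data are candidates for a DPIA.
dpia_candidates = [t.name for t in inventory if t.processes_personal_data]
print(dpia_candidates)  # prints: ['Transcription service']
```

The same inventory can feed the internal requirements mentioned above, with each risk category mapped to a required safeguard.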

2. Data Protection Impact Assessment (DPIA) 

GenAI usage meets the threshold for requiring a DPIA under the GDPR when personal data is processed. Undertaking a DPIA provides a comprehensive assessment of purposes, necessity and proportionality, risks to data subjects' rights and freedoms, mitigating measures, and overall compliance. A specific assessment of ethical risks can be built into the DPIA process to ensure that these are also considered and mitigated. 

3. AI and GenAI policy development 

Develop a policy for the internal use of AI. Make sure you draw on expert knowledge, so that you capture the most up-to-date and relevant aspects of current and future use of AI in your organisation. Also include a focus on the technical architecture and capabilities of AI tools, and draw up frameworks that can be implemented to support responsible use.   

Establishing a robust governance infrastructure mitigates risk, ensuring that GenAI is used safely, fairly, and effectively. The recommendations above are a starting point for organisations whose staff are using AI tools, and undertaking this work sends a strong message about responsible use and development of AI. 

 

Ethical AI solutions are central to Trilateral Research's purpose, within an established ecosystem of Data Protection and Cybersecurity services. For more information, please contact our advisors at dcs@trilateralresearch.com to discuss your requirements. Our team would be happy to help. 
