How to develop an AI policy or code of conduct to prepare for the future

Reading Time: 4 minutes

Author:

Dr Mikio Akagi | Senior Research Analyst

Date: 24 June 2024

AI is quickly becoming part of our everyday lives. According to a February 2024 report, a third of European businesses have adopted or are experimenting with AI technologies, and the adoption rate is rising. AI is also being incorporated into products we already use, such as search engines (including Google and Bing), productivity software (such as Microsoft Office) and media-editing software. Whether or not your organisation is actively interested in AI technology, it will soon be relevant to the way you work. If you want to take advantage of the opportunities AI offers while controlling the risks, you will need an AI policy or code of conduct. This article explains how to create one for your organisation. 

GenAI and its risks 

Generative AI (GenAI) systems generate ‘content’ in response to a prompt; they include OpenAI’s ChatGPT, Microsoft Copilot, Google Gemini and others. The content can be text, images, video, speech or other audio. Until 2022, most commercial AI systems were ‘narrow’ AI systems, designed to perform a single well-defined task, such as recommending products based on browsing history. Many new GenAI tools, by contrast, are ‘general-purpose’ AI (GPAI) systems meant to perform a wide variety of tasks (summarising, recommending, editing, classifying, translating and so on). GPAI systems can be powerful tools, helping to speed up and simplify many daily tasks, but the downside of flexibility is unpredictability. An AI policy for your organisation will help to ensure that GenAI tools are used safely and ethically. 

GenAI systems, especially those that are also GPAI, are subject to well-known risks, including (but not limited to): 

  • False or misleading information: GenAI systems often produce content that sounds plausible but is inaccurate. 
  • Privacy breaches: sensitive or personal data entered into a prompt may be exposed or reused. 
  • Plagiarism: generated content may reproduce or closely resemble existing work. 
  • Bias: outputs can reflect and amplify biases in the data a system was trained on. 
  • Environmental impact: training and running large models consumes significant energy. 

An AI policy or code of conduct for your organisation can help you to avoid these risks and derive the greatest benefit from AI tools. 

What to put in an AI policy 

There is no one-size-fits-all code of conduct for using AI. The best policy for your organisation will depend on what you do, which regulations apply in your jurisdiction, what kinds of data your employees handle, your organisational culture, and so on. Here are some suggestions and questions to consider: 

  • Scope: Consider whether you intend for your policy to cover AI systems in general (including narrow AI) or just general-purpose generative AI. Will you have a single policy for all teams in your organisation, or different policies for teams like HR or marketing? If your organisation uses contractors, will they be covered by the policy? 
  • Privacy: Ensure that your employees understand how to recognise sensitive information such as personal data or personally identifying information, and that they do not share sensitive information with AI systems. 
  • Fact-checking: GenAI systems often produce content that is plausible-sounding but false or misleading. Under what circumstances should employees fact-check GenAI outputs? In what circumstances should employees avoid relying on GenAI in the first place? 
  • Transparency and documentation: Sometimes you might wish to disclose when an AI system was used to make a decision or to produce content. The EU AI Act will require such disclosures in some circumstances. You may need to prepare for these disclosures by documenting your use of AI tools during your work. 
  • Training: Employees who use AI tools need to be aware of the practical, legal and ethical issues around their use. This awareness should cover the benefits and risks of AI tools, legal obligations around privacy and transparency, and ethical risks such as those listed above (false information, plagiarism, bias and environmental impact). The EU AI Act requires providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff.
  • Procurement and adoption: How will you decide which AI systems to adopt in your organisation? Do you have special procurement procedures for AI products, as some public sector organisations do? Do you develop your own AI software, or co-design bespoke software with a developer such as Trilateral Research? Do you document your AI tools in a centralised AI register (see the sketch after this list)?
  • Compliance: Which regulations apply to your organisation? Organisations that must comply with the EU AI Act face additional requirements, particularly for ‘high-risk’ AI systems. 
  • AI governance: Time and attention are required to manage AI tools responsibly. Consider creating a dedicated AI governance framework. For example, you might appoint a senior AI officer, or an AI board to advise on tricky issues. 
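To make the documentation and register points above more concrete, the sketch below shows one way an entry in a centralised AI register might be structured. It is a minimal illustration only: the field names, categories and values are assumptions chosen for the example, not a prescribed or standard format, and should be adapted to your own policy and obligations.

```python
# Illustrative sketch of a centralised AI register entry.
# All field names and values are hypothetical examples, not a standard schema.
from dataclasses import dataclass
from datetime import date


@dataclass
class AIRegisterEntry:
    tool_name: str              # the product or model in use
    vendor: str                 # who supplies or hosts the system
    purpose: str                # what the tool is used for
    teams: list[str]            # which teams rely on it
    data_categories: list[str]  # kinds of data it may process
    risk_notes: str             # known risks and agreed mitigations
    owner: str                  # person accountable for this tool
    last_reviewed: date         # when the entry was last checked


# Example entry with hypothetical values.
register = [
    AIRegisterEntry(
        tool_name="General-purpose chat assistant",
        vendor="Example vendor",
        purpose="Drafting and summarising internal documents",
        teams=["Marketing", "HR"],
        data_categories=["No personal data permitted"],
        risk_notes="Outputs must be fact-checked before publication",
        owner="AI governance lead",
        last_reviewed=date(2024, 6, 1),
    )
]

for entry in register:
    print(f"{entry.tool_name} ({entry.vendor}) - owner: {entry.owner}, "
          f"last reviewed: {entry.last_reviewed}")
```

Even a simple record like this, kept in a spreadsheet or internal database, makes it easier to answer transparency questions, prepare for regulatory disclosures and review your AI tools regularly.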

How to create an AI policy 

The items above are partly suggestions and partly questions about how best to use AI in your organisation. How should you go about answering them? 

  • Consult relevant experts, including experts in compliance, IT and technology ethics. 
  • Consult stakeholders in your organisation who might benefit from using AI in their work. Seek input across teams, including HR, marketing/PR, and other functions in your organisation. Involve managers who will eventually enforce the policy and answer questions about it. 
  • Plan to update your policy regularly over the next few years. AI tools are changing quickly, so it’s better to adapt rather than trying to predict the future too far in advance. 
  • Don’t ban GenAI outright. Employees have tended to ignore such bans, and will only do so more as AI tools become more normalised and easier to access. 

To develop your own AI policy, start with the steps above or reach out to us at Trilateral Research. Our Ethics Innovation Team has extensive experience developing tailored AI policies, including policies designed for compliance with the EU AI Act, and would be happy to discuss your requirements. 
