AI and Democracy: Our Evidence Submission to Parliament

Authors: Trilateral Research

Date: 19 April 2024

The rise of Artificial Intelligence (AI) puts democracies around the world at risk. Bad actors are using AI to spread false information, influence public opinion, and interfere with elections. At Trilateral Research, we believe that protecting democracy in the digital age requires active collaboration among governments, civil society, and the private sector.

Our recent submission to the UK Parliament’s Call for Evidence, written by our co-founder Dr David Wright and researcher Dr Richa Kumar, drew on findings from the EU-funded ATHENA project, which Trilateral coordinates. Through ATHENA, we are collaborating with several organisations to investigate case studies of foreign information manipulation and interference (FIMI). Together, we analyse malicious actors’ tactics, develop novel AI-based countermeasures, and recommend policy measures. Although the project launched only recently, ATHENA has already yielded valuable insights, some of which can be seen in our blog post exploring FIMI in the 2024 elections.

One of the most prominent threats to democracy noted in our submission is the spread of AI-generated content. Generative AI tools allow bad actors to create content that deceives voters and manipulates public opinion. The algorithms that distribute this content can create false perceptions of consensus, deepen social divisions, and legitimise controversial views. Many users struggle to differentiate between real and synthetic content, which breeds uncertainty and anxiety.

As part of our submission, we listed several key policy recommendations that would help mitigate the negative effects of generative AI:

1. Mandatory Labelling and Transparency

Tech companies should be required to clearly label and watermark AI-generated content. This is essential for fighting the spread of AI-generated disinformation, especially in political advertising and elections. By clearly distinguishing AI-generated content from human-created content, we can help citizens critically engage with the information they encounter. Where possible, AI-generated content should also link to the original source material it is based upon, helping viewers contextualise what they see.

2. Accountability Measures

To protect the integrity of our democratic processes, we recommend strict bans on AI-generated content that has been manipulated to deceive, scam, or confuse. This is especially important for any AI content that may affect electoral processes and undermine the foundations of democracy. Politicians and companies should be held responsible for producing, hosting, and spreading AI-generated content that can influence public opinion and worsen existing divisions. Strong accountability mechanisms, including penalties for violations, are necessary.

3. Data Access and Sharing Frameworks

Countering AI-powered disinformation effectively requires a collaborative, data-driven approach. However, researchers and watchdogs often face challenges in accessing relevant data. We recommend establishing clear legal frameworks and technical standards to enable secure, privacy-preserving data sharing for the purpose of defending democratic integrity. The goal should be to empower a network of trusted partners to continuously monitor and mitigate disinformation risks through real-time, data-driven insights.

4. Citizen Empowerment and Resilience

Empowering citizens to navigate the complexities of the digital information landscape is essential for building resilience against AI-powered disinformation. We recommend organising citizen assemblies, world cafés, and other participatory methods to educate voters on identifying AI-generated content. By giving individuals the tools and knowledge needed to spot manipulated information, we can create a more informed and discerning electorate.

5. Media Responsibility

Social and traditional media play a crucial role in shaping public discourse. To combat the spread of AI-generated disinformation, we recommend that media outlets focus on responsible reporting and avoid amplifying stories whose only newsworthy aspect is the use of AI-generated content. By discouraging the spread of misinformation and manipulation through AI, media organisations can contribute to a healthier information environment and support the integrity of democracy.

The policy recommendations we have proposed are crucial for protecting democracy against the growing threats posed by AI-powered disinformation and foreign interference. At Trilateral Research, we are dedicated to driving collaborative solutions and innovative countermeasures to ensure a strong and resilient democratic future. As a trusted leader in ethical AI, we are ready to collaborate with policymakers, industry partners, and civil society to refine and implement these proposals. Together, we can address the challenge of defending democracy in the digital age, ensuring that AI strengthens, rather than weakens, the democratic values we cherish.
