AI in Law Enforcement: Balancing power, innovation and ethics

Reading Time: 4 minutes

Authors:  

Dr Alex Murphy | Research Communications Officer
Dr Joshua Hughes | Cluster Lead - Law Enforcement and Community Safeguarding
Dr Hayley Watson | Director, Sociotech Innovation
Trilateral Research

Date: 2 November 2023

Artificial Intelligence (AI) is revolutionising science and industry and extending ever further into citizens’ daily lives. Concerns about how these powerful new tools might disrupt relations between the public and the state are widespread, particularly in the law enforcement arena, where tensions (such as those embodied by the Black Lives Matter movement) are commonplace. Within Trilateral Research’s Law Enforcement and Community Safeguarding (LECS) cluster, our researchers are advancing the understanding of AI ethics through diverse research projects. This work reflects Trilateral’s sociotech approach, in which social and technical considerations are integrated into the design process to push forward responsible innovation. For instance, our ethical AI application CESIUM aids safeguarding partnerships that protect vulnerable children, showcasing the positive societal impact of Trilateral’s approach.

 

The evolution of AI in law enforcement

The AI regulatory landscape is changing. Debate in the USA is in its infancy, with a new Executive Order offering indications of the approach, while China has taken significant regulatory steps in advance of a national law. The UK has undertaken important action, but Europe stands as the global leader in its attentiveness to the risks and benefits of AI, five years after the GDPR set a global benchmark for data protection. The AI Act, now proceeding through the European Parliament, categorises risk levels for AI across sectors, covering the management of general-purpose tools like ChatGPT as well as high-risk applications like biometric identification. Law enforcement applications are a particular focus of scrutiny: the evaluation of evidence, deception detection, and profiling are labelled high risk, while real-time biometrics, offender risk assessments, and technology for inferring emotions are expressly prohibited.

This caution is merited. In the law enforcement space, the UK’s Gang Violence Matrix has proved in recent years to be both inaccurate and a serious breach of data protection law. Deployed by the Metropolitan Police to assess individuals’ risk of involvement in criminality, the tool mistakenly drew thousands of innocent adults and minors into its orbit, replicating racial biases and embedding flawed data within its ‘black box’. Meanwhile, Clearview AI illegally processed more than three billion images from the public internet and social media to feed its facial recognition platform, which police and private organisations around the world went on to employ. Fines and legal challenges in the USA, Canada, the UK, Germany, Australia, France, and Italy reflected the global condemnation. Other misguided or reckless law enforcement AIs offer similar warnings and highlight the same pitfalls.

 

From theory to practice: How CESIUM and TRACE are shaping policing

Trilateral uses its expertise in social, legal, and ethical issues to ensure that high-risk tools are developed responsibly. Law enforcement AI, wielded by the state to curtail citizens’ liberties, can be extraordinarily hazardous. At the same time, the use of algorithms to drive efficient, rigorous investigations has the potential to protect the public like never before. The principle of ethics-by-design is valuable here: privacy, respect, and fairness must, and can, be built into AI tools from the start of their development. CESIUM reflects this, operating in a complex environment in which child exploitation overlaps with other offences and occurs behind closed doors, and where official responses have been stymied by weak data sharing and coordination between agencies. It empowers safeguarding professionals with secure access to data across multiple agencies and augments decision-making with transparent, explainable AI tools designed to identify children at risk. This is achieved by modelling the vulnerabilities which typically precede referral to ‘safeguarding partners’ (i.e., a local authority, medical, or police organisation) and awarding a prioritisation score that guides decision makers (such as Lincolnshire Police) in protecting children; a simplified sketch of this kind of transparent scoring follows below.

Current and recent LECS projects like ALUNA, CEASEFIRE, DARLENE, GATHERINGS, HEROES, INSPECTr, popAI, PREPARE, ROXANNE, and TRACE are also helping law enforcement agencies tackle crises, from firearms trafficking to violent extremism and organised crime. In all these projects, the sociotech approach plays a crucial role in the development of robust and ethical technologies.
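To make CESIUM’s transparent prioritisation scoring more concrete, the sketch below shows the general shape of such a score in Python. It is illustrative only: CESIUM’s actual models, indicators, and weights are not described here, and the indicator names and weights used are hypothetical assumptions, chosen to show how a score can be decomposed into per-indicator contributions that a safeguarding professional can inspect.

from dataclasses import dataclass

# Hypothetical vulnerability indicators and weights (assumptions, not CESIUM's actual model).
WEIGHTS = {
    "missing_episodes": 3.0,        # repeated missing-from-home reports
    "school_exclusion": 2.0,
    "known_exploiter_contact": 4.0,
    "prior_referral": 1.5,
}

@dataclass
class ScoredCase:
    case_id: str
    score: float
    contributions: dict  # per-indicator breakdown, kept for explainability

def prioritisation_score(case_id, indicators):
    """Combine indicator values (counts or 0/1 flags) into a single score,
    retaining per-indicator contributions so the result can be explained."""
    contributions = {
        name: WEIGHTS[name] * float(indicators.get(name, 0))
        for name in WEIGHTS
    }
    return ScoredCase(case_id, sum(contributions.values()), contributions)

case = prioritisation_score("case-001",
                            {"missing_episodes": 2, "known_exploiter_contact": 1})
print(case.score)          # 10.0
print(case.contributions)  # shows which indicators drove the score

In practice the indicators, weights, and underlying model would be far richer and developed with domain experts, but an inspectable, per-feature breakdown of this sort is what makes an output explainable to the decision makers who act on it.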

TRACE, for example, is creating AI tools to tackle money laundering, bringing together law enforcement, researchers, and technical partners. The prospect of powerful software tracking illicit financial flows through the surface, deep, and dark web, gathering hidden data to facilitate asset recovery, raises obvious concerns and requires significant safeguards to mitigate the risk of misuse. As leader of the project’s ethics workstream, Trilateral implements rigorous procedures to guarantee compliance with ethical standards. These cover everything from data capture and acquisition (via police data or web crawling, for example) through to the ethical monitoring of the varied workstreams engaging with citizens and law enforcement agencies. Trilateral is also committed to reducing the potential for TRACE tools to inadvertently entrench discrimination and stigmatisation, breach privacy illegitimately, or create harm for participants, third parties, and communities.
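To give a flavour of the kind of analysis involved in tracing illicit financial flows, the following is a minimal, hypothetical sketch of one textbook anti-money-laundering heuristic: flagging ‘layering’ chains, in which funds hop quickly through several intermediary accounts. It is not TRACE’s actual tooling, and the account names, amounts, and thresholds are invented for the example.

from collections import defaultdict

# Hypothetical transactions: (sender, receiver, amount). Invented data, not TRACE's.
transactions = [
    ("acct_A", "acct_B", 9_500),
    ("acct_B", "acct_C", 9_400),
    ("acct_C", "acct_D", 9_300),
    ("acct_X", "acct_Y", 120),
]

def build_graph(txns):
    """Build a directed graph of who sent money to whom."""
    graph = defaultdict(list)
    for sender, receiver, amount in txns:
        graph[sender].append((receiver, amount))
    return graph

def layering_chains(graph, min_amount=1_000, min_hops=3):
    """Return account chains where large sums hop through at least `min_hops` accounts."""
    chains = []

    def walk(node, path):
        if len(path) - 1 >= min_hops:
            chains.append(list(path))
        for nxt, amount in graph.get(node, []):
            if amount >= min_amount and nxt not in path:  # follow large transfers, avoid cycles
                walk(nxt, path + [nxt])

    for start in list(graph):
        walk(start, [start])
    return chains

flagged = layering_chains(build_graph(transactions))
print(flagged)  # [['acct_A', 'acct_B', 'acct_C', 'acct_D']]

Even a toy heuristic like this shows why the ethical safeguards described above matter: the thresholds, data sources, and graph-building choices all determine who gets flagged, and each is a point where bias or error can enter.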

 

The path to ethical AI in law enforcement

Our researchers frequently provide guidance on responsible data use, including the handling of the most sensitive personal data. Informed consent is one ethical lodestar, ensuring that participants grasp their rights and obligations. The engagement of multidisciplinary Ethics Boards of external experts gives additional perspective to researchers and end users of these technologies. These measures are standard parts of Trilateral’s approach to law enforcement research projects, but investigative tools raise particular ethical challenges. Trilateral has focused its work on dealing with the risks of intrusion, errors due to bias and discrimination, damage to trust (in authorities or science), and potential social chilling effects (when, for example, policing tools use machine learning techniques to extrapolate human networks). Law enforcement AIs will not necessarily create these alarming ethical problems, but responsible research requires the assessment and mitigation of such risks. The functionality of these technologies must be carefully constrained, with control structures and safeguards embedded at every level.

In 2019, the EU’s High-Level Expert Group on AI (HLEG AI) published a list of seven concrete ethical requirements for trustworthy AI systems: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability. These guidelines were developed alongside the SHERPA project (in which Trilateral was involved) and adapted for our work on projects like CESIUM and TRACE. In law enforcement, the stakes for AI are high, and approaches like Trilateral’s ethics Touchpoint Table™ help maximise the benefits of these new technologies. In the deployment of our own technology we emphasise a Shared Responsibility Model and the importance of Machine Learning Governance in ongoing accountability efforts. Inattention to ethical, legal, and social factors risks harm and inequality, eroding public trust in Artificial Intelligence and endangering human rights, privacy, and safety. Our work demonstrates that such risks can be mitigated, provided the implicated parties are willing to do so.

 

At Trilateral Research, we work across the UK and internationally on the development, implementation and governance of responsible AI. Get in touch to find out more.
