Assessing the Ethical Implications of Artificial Intelligence In Policing workshop – 14th ACM Web Science Conference

Authors: Trilateral Research

Date: 30 March 2022

Trilateral Research is organising the half-day workshop "Assessing the Ethical Implications of Artificial Intelligence In Policing", to be held as part of the 14th ACM Web Science Conference (WebSci'22) taking place 26-29 June 2022 in Barcelona (and hybrid).

Workshop theme

The adoption of artificial intelligence in policing brings significant benefits alongside ethical challenges to individuals' rights, societal wellbeing, and societal values. For example, the police can use AI to collect intelligence and analyse data in ways that enhance decision-making and help identify individuals in need of safeguarding; however, AI models also rely on profiling to make predictions. Using attributes such as ethnicity, race, gender, and socio-economic status to assess the risk of criminal behaviour therefore has the potential to curb individual freedoms and to produce false or even unethical predictions. Owing to both the crucial benefits and the potential harm to rights and values, AI in policing requires rigorous ethical scrutiny.

Crucially, ethical scrutiny must engage all aspects of the policing context. The context for adopting AI is neither solely technical nor solely societal; it is a socio-technical interaction in which technological systems (for law enforcement or safeguarding) are designed by humans (developers) for humans (police), with an impact on individuals (citizens, residents, suspects, victims) and their relationship with the police. Against this socio-technical complexity, the workshop seeks to stimulate discussion of the ethical implications of AI in policing and to share interdisciplinary methods for assessing them.

Workshop Objectives:

The goal of the workshop is to address the research question, "How can the police and developers assess the ethical impact of AI in policing?" This question arises from Trilateral Research's recent experience of assessing the ethical impact of AI in policing, during which we have identified two main problems. First, unethical or insufficiently ethical AI tools continue to be developed for, and deployed by, the police; how, then, can the police and developers assess ethical impacts? Second, there appears to be a lack of widespread, nuanced awareness of how to evaluate the ethical implications of AI tools in policing; how, then, can the police and developers co-create "ethical by design" AI technologies? In addressing these questions across three interdisciplinary streams (ethical theory, Explainable AI, and co-design), the workshop offers theoretical, socio-technical, and practical insights.

Workshop Streams:

1. Conducting an ethics assessment of AI in policing

The first workshop stream focuses on ethical theory and ethical frameworks for assessing the impact of AI in policing. The presentation by Dr. Zachary Goldberg will synthesise contributions from philosophical ethics, ethical AI guidance documents (e.g. the HLEG AI guidelines on creating trustworthy human-centred AI, the College of Policing Code of Ethics, and ALGOCARE), and assessment methods from three projects: two H2020 projects (Inspectr and Roxanne) and an InnovateUK project (CESIUM). Two of the principal ethics requirements expected to emerge from this first stream are explainability and co-design with end-users; these requirements shape the following two streams.

The second presentation will be selected from submissions to the call for papers (see below).

2. Applying Explainable AI in a policing context

The second stream focuses on Explainable AI in policing. The first presentation, by Stephen Anning, will be a case study of Project CESIUM showing an application of O'Hara's (2020) WebSci'20 paper on Explainable AI. Project CESIUM is a decision-support tool for child-exploitation caseworkers in Lincolnshire. O'Hara views Explainable AI as a human-machine system in which algorithms make meaningful contributions to an explanatory dialogue. We will show how the explanatory dialogue of CESIUM draws upon the Multi-Agency Child Exploitation (MACE) process and the policing intelligence cycle, and how our algorithms contribute meaningfully to this dialogue through their outputs and accompanying rationales. The second presentation will be an open call to other practitioners who are similarly applying Explainable AI. For the Q&A, we will invite a critical analysis of these case studies.

The second presentation will be selected from submissions to the call for papers (see below).

3. Co-designing with Police

The third stream is about enabling co-design between the police and developers. The police will only get the AI systems they ask for; co-design between end-users and developers therefore helps ensure the co-creation of fit-for-purpose tools. Nonetheless, co-design is not without its practical challenges. This stream will include two presentations from police representatives, who will describe the opportunities and challenges of co-designing AI.

Call for Papers

We invite submissions for the first two workshop streams:

1. Conducting an ethics assessment of AI in policing

2. Applying Explainable AI in a policing context

For each stream, please submit either an abstract of 750 words or a camera-ready "short paper" to both of the organisers by 1 May 2022.

Only “short papers” are eligible to be included in the published conference proceedings.

To be eligible, “short papers” must use the ACM template and adhere to the conference submission formatting and conditions, which can be found on the WebSci’22 website.

Submissions should align in scope and content with the conference's general objectives of interdisciplinarity and practical impact.

Authors should submit to only one of the open streams.

Travel or other costs cannot be reimbursed by Trilateral or WebSci’22. Presenters must pay the conference registration fees established by WebSci’22.

WebSci’22 is a hybrid event. If the Covid-19 situation allows, we very much hope to see workshop participants in person in Barcelona.

Decisions will be sent out on 10 May 2022.
