Artificial Intelligence in Law Enforcement Agencies: Addressing the Challenges and Building Trust

Law Enforcement Agencies (LEAs) increasingly rely on Artificial Intelligence (AI) technology to prevent, investigate and combat crime. Innovative technologies promise more effective citizen and border protection, safety, and security.

Yet, the use of AI by LEAs has drawn significant controversy, stemming mainly from a lack of transparency and accountability, insufficient ethics training, the absence of concrete regulatory frameworks, and cases of AI misuse.

Fostering trust in AI use in the security domain

The Horizon 2020 project popAI aims to foster trust in the application of AI in the security domain by increasing awareness and social engagement while gathering knowledge and expertise from multiple sectors (e.g., academic and non-academic actors).

This approach will offer a unified European view of AI use in the security domain and set the grounds for the development of a European AI hub for Law Enforcement that promotes the ethical and socially sustainable application of emerging technologies by:

  • Building novel taxonomies of functionality, legal casework and ethical principles of AI in LEAs, feeding into an ethics toolbox.
  • Mapping the ecosystem of AI tools in the security domain in order to identify, explore and analyse all stakeholders’ (including citizens, NGOs, and LEAs) relevant fears and concerns.
  • Integrating stakeholders’ insights to produce recommendations and guidelines for LEAs, civil society, and technology developers to ensure an ethical development and application of AI in the security domain.

Mapping the controversies around AI in LEAs to ensure an inclusive approach

Trilateral maps the controversy ecosystems of AI tools in the security domain. To gain insights into the diverse experiences of European LEAs and citizens, Trilateral draws on empirical research to identify who is shaping the concerns about AI in security, and around which issues. This mapping ensures the inclusion of the diverse experiences of all relevant stakeholders.

Data analytics to understand citizen discourses

Trilateral adopts a sociotechnical approach, analysing social media to better understand citizen discourses around AI and security controversies. Trilateral uses computational methods such as social listening and data mining to gain a deeper understanding of the fears, concerns, and optimism around the use of AI by LEAs.
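As a rough illustration of what such social listening can involve, a minimal form of theme tagging over a corpus of posts can be sketched as below. The keyword lexicons, theme names, and example posts are invented for this sketch; a real pipeline would rely on curated lexicons or trained classifiers rather than a toy keyword match.

```python
from collections import Counter
import re

# Hypothetical theme lexicons, invented for illustration only.
THEMES = {
    "privacy": {"surveillance", "privacy", "tracking"},
    "bias": {"bias", "discrimination", "unfair"},
    "optimism": {"safer", "helpful", "protect"},
}

def tag_posts(posts):
    """Count how many posts touch on each theme, by keyword overlap."""
    counts = Counter()
    for post in posts:
        tokens = set(re.findall(r"[a-z]+", post.lower()))
        for theme, keywords in THEMES.items():
            if tokens & keywords:
                counts[theme] += 1
    return counts

# Invented example posts standing in for scraped social media data.
posts = [
    "Facial recognition feels like constant surveillance",
    "AI could make our streets safer",
    "Worried about bias in predictive policing",
]
print(tag_posts(posts))  # Counter({'privacy': 1, 'optimism': 1, 'bias': 1})
```

Aggregating such theme counts over time and across sources is one simple way to surface which concerns dominate a public discourse, before deeper qualitative analysis.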

Multi-disciplinary foresight scenarios to promote collaboration between stakeholders

To foster communication between diverse disciplinary perspectives, Trilateral co-produces with relevant stakeholders (citizens, LEAs and experts) a set of foresight scenarios addressing diverse AI controversies. The scenarios will examine what good and bad futures could look like depending on how LEAs engage with AI. This participatory activity provides a multi-disciplinary foundation for identifying awareness needs and pathways for fostering future engagement.

For more information and updates visit the popAI website and follow us on Twitter and LinkedIn.

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101022001.

Learn more about our research in the field of Law Enforcement and Community Safeguarding.
