At Trilateral Research we provide ethical AI solutions for tackling complex social problems in the public sector. In Project IRON, funded by the Defence and Security Accelerator (DASA), we are conducting innovative research to develop explainable AI that can help combat organised crime (OC). The work focuses on three principal areas: graph network analysis, innovative explainability features, and an ethical AI framework applicable to the research and development of these features.
Understanding networks, that is, the interconnections among people, places, events, vehicles and more, is key to understanding organised crime. Mapping organised crime groups (OCGs) is a significant challenge because the connections among offenders and criminal groups are difficult to ascertain. In direct response to end-user needs, Project IRON uses network analysis techniques to help the police understand criminal networks and tackle organised crime. Examples of technical approaches include applying clustering techniques to identify groups of people working together, or calculating centrality metrics to identify key people and locations within a particular network. Trilateral will enhance graph network capabilities that can: identify groups of offenders working together, determine whether an individual crime is part of larger organised criminal activity, and identify key linkages between OCGs.
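To make these two approaches concrete, here is a minimal, self-contained sketch on a synthetic co-offending network. It is purely illustrative, not Project IRON code: the people ("A" to "F"), the edges and the heuristics are invented for the example. Betweenness centrality surfaces brokers who bridge otherwise separate groups, and a crude clustering heuristic (dropping ties whose endpoints share no common contact) recovers the groups themselves.

```python
from collections import deque
from itertools import combinations

# Hypothetical co-offending network: nodes are people, an edge means two
# people appear together in incident records (all names invented).
edges = [("A", "B"), ("A", "C"), ("B", "C"),   # one tight-knit group
         ("D", "E"), ("D", "F"), ("E", "F"),   # a second group
         ("C", "D")]                           # a single broker tie between them

adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def all_shortest_paths(src, dst):
    """Enumerate every shortest path from src to dst with a breadth-first search."""
    queue, found, best = deque([[src]]), [], None
    while queue:
        path = queue.popleft()
        if best is not None and len(path) > best:
            break
        if path[-1] == dst:
            best = len(path)
            found.append(path)
            continue
        for nxt in adj[path[-1]]:
            if nxt not in path:
                queue.append(path + [nxt])
    return found

# Centrality: betweenness counts how often a person sits on the shortest
# routes between all other pairs, highlighting brokers between groups.
centrality = {n: 0.0 for n in adj}
for s, t in combinations(adj, 2):
    paths = all_shortest_paths(s, t)
    for p in paths:
        for middle in p[1:-1]:
            centrality[middle] += 1 / len(paths)

# Clustering (crude heuristic): drop edges whose endpoints share no common
# contact (likely broker ties), then read off connected components as groups.
core = {u: set(vs) for u, vs in adj.items()}
for u, v in edges:
    if not (adj[u] & adj[v]):
        core[u].discard(v)
        core[v].discard(u)

def connected_components(graph):
    seen, groups = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(graph[n])
        seen |= comp
        groups.append(sorted(comp))
    return groups

groups = connected_components(core)
print("groups:", groups)   # [['A', 'B', 'C'], ['D', 'E', 'F']]
print("brokers:", [n for n in adj if centrality[n] > 0])   # ['C', 'D']
```

In practice a library such as NetworkX would supply production-grade versions of both steps; the point here is only how clustering and centrality answer the two questions above (who works together, and who links the groups).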
Artificially intelligent solutions do not benefit police or safeguarding organisations if the inner workings of the algorithms and their outputs are not explained in clear and meaningful ways. Project IRON's solutions augment decision-making by including effective and meaningful explainability features. Explainability is among the most crucial aspects of the overall methodological framework for IRON: it supports transparency, enhanced autonomy, decision support and the mitigation of bias, and it facilitates trust in the tool, resulting in human-centred, ethical AI. For Trilateral Research, putting humans at the centre of its design and development work means prioritising the use, understanding, trust and societal impact of its automated tools. By leveraging expertise in ethical theory, and by distilling relevant insights from ethical AI principles recommended by authoritative bodies such as the EU's High-Level Expert Group on AI (AI HLEG), the OECD and the Alan Turing Institute (UK), the IRON ethics team embeds ethics-by-design into both the research and technical development processes.
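One way such an explainability feature might look in practice is pairing each algorithmic flag with a plain-language rationale an analyst can read and challenge. The sketch below is a hypothetical illustration, not Project IRON's implementation; the function, its inputs and the example values are all invented for the example.

```python
# Illustrative sketch only: rendering an algorithmic output (a network flag)
# as a human-readable rationale, so the decision can be understood and queried.

def explain_flag(person, score, bridged_groups, shared_events):
    """Turn hypothetical outputs of a network-analysis step into prose.

    person         -- label of the flagged individual
    score          -- the centrality score that triggered the flag
    bridged_groups -- names of groups this person links together
    shared_events  -- incident records supporting the connections
    """
    reasons = []
    if bridged_groups:
        reasons.append(f"{person} is the main link between groups "
                       + " and ".join(sorted(bridged_groups)))
    if shared_events:
        reasons.append(f"co-appears in {len(shared_events)} incident records")
    return (f"Flagged {person} (centrality score {score:.2f}) because: "
            + "; ".join(reasons) + ".")

# Hypothetical usage with invented values:
message = explain_flag("Person C", 0.60, {"Group 1", "Group 2"}, ["E1", "E2", "E3"])
print(message)
```

The design point is that the explanation names the evidence (which groups, which records) rather than exposing only a bare score, which is what makes the output contestable by a human decision-maker.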
Project IRON is funded until the end of July 2023. Stay tuned for further updates as the technical, explainability and ethics work progresses.
Contact Zachary.email@example.com for questions or feedback.