A Surfeit of Data and the Importance of Explainability for Defence

Reading Time: 2 minutes

Author:

Dr Zachary J. Goldberg | Ethics Innovation Manager

Date: 17 November 2021

The defence sector is well aware of the strategic and operational benefits of harnessing big data. In both kinetic and non-kinetic contexts, big data can provide mission-critical information that can save lives and keep people safe.

Over the past few years, the volume of data generated by military actors, NGOs, and human security stakeholders has grown to almost overwhelming levels. As a result, an urgent question for the defence community is not only how to access more data, but how to process large volumes of data faster – in other words, how to automate data processing through artificial intelligence (AI). However, accompanying this task are critical questions concerning what information is collected, how algorithmic models and other tools process the data, and why a particular result is generated. Users of AI in defence want to understand what, how, and why specific information is informing their decision process, especially when lives are at stake.

Establishing trustworthy AI 

The ultimate goal of addressing these questions around how AI-driven results are generated is not simply to offer explanations for their own sake, but to generate trustworthy AI insights that defence can deploy, with adequate confidence, to enhance strategic and operational decision-making. To achieve this end, we employ a detailed methodology built around five queries to communicate:

  • what (data explainability)  
  • how (algorithmic explainability)  
  • why (output explainability)  
  • when (context)  
  • to whom (end user)  

This method facilitates deeper understanding on the part of our defence users, which in turn supports better informed decision-making. Furthermore, by clearly explaining the what, how, and why of the tool’s data analytics and insights, the risk of automation bias is mitigated, as the end user can more clearly identify the scope of the tool’s insights.
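To make these five facets concrete, the sketch below shows one way such explainability metadata could be recorded alongside an individual model output. It is purely illustrative: the class, field names, and example values are assumptions made for this post, not TRI’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ExplainabilityRecord:
    """Illustrative container pairing one model output with the five explainability facets."""
    what: str     # data explainability: which sources and features fed the result
    how: str      # algorithmic explainability: which model and logic produced it
    why: str      # output explainability: the factors driving this particular result
    when: str     # context: the operational situation the result applies to
    to_whom: str  # end user: the audience the explanation is tailored for

# Hypothetical record accompanying a single risk-assessment output
record = ExplainabilityRecord(
    what="Open-source incident reports, 2019-2021, region of interest",
    how="Gradient-boosted classifier over engineered incident features",
    why="Score driven mainly by recent incident frequency and proximity",
    when="Pre-deployment planning; not validated for time-critical use",
    to_whom="Operational analyst preparing a situational briefing",
)
```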

Next, in building and assessing machine learning models, we not only determine levels of accuracy but also scrutinise inaccuracies and their possible causes. For example, it is not sufficient to know that an AI model is 90% accurate. We must also understand and communicate to defence users which particular factors sit behind its 10% inaccuracy. Understanding why a tool might be mis- or distrusted is essential for establishing trustworthy AI, understanding the opportunities and limitations of its outputs, and thereby supporting better informed decision-making.
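As an illustration of this kind of error analysis, the sketch below trains a simple classifier and then breaks its misclassifications down by a contextual factor to see which conditions account for most of the remaining inaccuracy. The dataset, file name, and column names are hypothetical, and scikit-learn stands in here for whatever modelling stack is actually used.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical incident dataset: numeric features, a "label" column, and a
# categorical "data_source" column describing where each record came from.
df = pd.read_csv("incident_reports.csv")  # illustrative file name

train_df, test_df = train_test_split(df, test_size=0.3, random_state=0)

feature_cols = [c for c in df.columns if c not in ("label", "data_source")]
model = RandomForestClassifier(random_state=0)
model.fit(train_df[feature_cols], train_df["label"])

preds = model.predict(test_df[feature_cols])

# Overall accuracy alone says little about why the model fails...
print(f"Overall accuracy: {(preds == test_df['label'].values).mean():.1%}")

# ...so break the misclassifications down by a contextual factor to see
# which collection conditions account for most of the inaccuracy.
errors = test_df.assign(error=(preds != test_df["label"].values))
print(errors.groupby("data_source")["error"].mean().sort_values(ascending=False))
```

A per-factor breakdown like this is what allows the remaining 10% inaccuracy to be explained to end users in concrete terms rather than reported as an abstract figure.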

At Trilateral Research (TRI), Explainable AI (XAI) is a principal driver behind our design of AI-based technologies. TRI-XAI is the vehicle through which we establish transparency, understanding, autonomy, decision-support, non-discrimination, and mitigation of automation bias, with the explicit purpose of creating a trustworthy AI solution.  

Technical rigour on its own does not lead to trust

Understanding why a tool might be mis- or distrusted is not only a technical pursuit. Our sociotech team leverages social science, data science, and technology development expertise to build a thorough understanding of the nature of trust, distrust, and mistrust in human-automation interaction in the defence sector. While there are several theories offering insight into the nature of trust (Barber 1983; Pruitt and Rubin 1986; Mayer et al. 1995; Lee and See 2004; Hoff and Bashir 2015), this plurality of compelling theories challenges the possibility of settling on a single definition of trust.

Trust can be a disposition, a mental state, an emotion, a set of socially learned expectations, a situational response, or a combination of these factors. Nevertheless, these diverse theories have identified a key commonality: trust is instantiated via a triadic relation involving a truster, a trustee, and a situational context characterised by uncertainty or risk, in which something is at stake whether one trusts or distrusts. We use these insights to inform the development of explainability features within our products in order to foster and support trustworthiness.

Given the surfeit of data available to defence decision-makers, the crucial question is not only how to collect more data but also how to collect the right data. The fundamental question today is how to accelerate defence’s data processing capabilities through automation in an explainable and ethical way, generating trustworthy AI-based decision-support tools in which defence end users can have confidence. Indeed, with the strategic and operational decision cycle increasingly dependent on processing large volumes of data quickly, this question will continue to define the understanding and decision-making context in defence.

For more information on our approach to Explainable AI, read more about our Sociotech Approach and contact our team.
