In this blog, Toby Fenton, Associate Product Manager at Trilateral Research, analyses the concept of risk and our approach to developing risk assessment methodologies in Project Solebay.
Risk is an inherent part of life. We often talk of ‘risky’ behaviour, ‘taking risks’ and being ‘risk averse’, and the language of risk is commonplace across professional domains from finance to computer security, politics to engineering, terrorism to public health. Risk does not always equate to something negative – we speak positively of ‘high risk, high reward’ or ‘high risk, high payoff’ activities – but it usually conveys a perceptible sense of danger. Unsurprisingly, it makes for good headlines: “Cyber attacks are the biggest risk, companies say”; “Neck scan reveals risk of dementia”; “Britain ‘at risk of blackouts without more gas storage’”.
It is not too difficult to understand what ‘risk’ means, at least conceptually. In his 2011 book Risk Assessment – Theory, Methods and Applications, Marvin Rausand explains that identifying and assessing risk revolves around three core questions:
- What can go wrong: what hazardous event(s) might produce damage or harm?
- What is the likelihood of those events occurring?
- What are the negative consequences if those events do occur?
This standard formulation of risk as a function of likelihood (or probability) and consequences (or impact) is explicitly articulated in a range of contexts – from managing information systems to conducting military operations. The dual dimensionality is crucial, as Yuval Harari illustrates in a passage in his 2016 book Homo Deus:
“Some people fear that today we are again in mortal danger of massive volcanic eruptions or colliding asteroids. Hollywood producers make billions out of these anxieties. Yet in reality, the danger [i.e. risk] is slim. Mass extinctions [i.e. the consequences dimension] occur once every many millions of years [i.e. the likelihood dimension]. Yes, a big asteroid will probably hit our planet sometime in the next 100 million years, but it is very unlikely to happen next Tuesday.”
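The two-dimensional formulation above can be made concrete with a simple risk matrix. The sketch below is purely illustrative – the scales, weights and thresholds are hypothetical, not those of any methodology discussed in this post – but it shows why neither dimension alone determines the level of risk:

```python
# Illustrative sketch only: a minimal risk matrix combining the two
# dimensions of risk. All scales and thresholds here are hypothetical.

LIKELIHOOD = {"rare": 1, "unlikely": 2, "possible": 3, "likely": 4, "almost certain": 5}
CONSEQUENCES = {"negligible": 1, "minor": 2, "moderate": 3, "major": 4, "severe": 5}

def risk_level(likelihood: str, consequences: str) -> str:
    """Combine both dimensions; neither alone determines the risk."""
    score = LIKELIHOOD[likelihood] * CONSEQUENCES[consequences]
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Harari's asteroid: severe consequences, but very low likelihood.
print(risk_level("rare", "severe"))  # → "low"
```

On these (arbitrary) thresholds, an event with severe consequences but negligible likelihood scores ‘low’ – which is exactly the point of Harari’s asteroid example.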
At Trilateral Research we are working with the UK Ministry of Defence on Project Solebay, which is funded by the Defence and Security Accelerator (DASA) and seeks to develop a risk assessment methodology to support the UK military’s response to modern slavery. However, our primary and secondary research has revealed that identifying and assessing risk can, in practice, be far more difficult than the above formula suggests.
From the well-known DASH checklist (which looks at the risks of domestic abuse, stalking and harassment, and honour-based violence) to the UNHCR’s ‘Heightened Risk Identification Tool’ (developed to assist Syrian refugees in Jordan), a significant proportion of the risk assessments we have reviewed in the domains of human security, policing and public safety focus almost entirely on one dimension of risk – usually likelihood, perhaps due to the difficulty of measuring and quantifying consequences – and/or articulate ‘risk’ in confusing ways. There are some exceptions, such as the law enforcement risk management tool MoRiLE, which explicitly looks at both likelihood and impact/consequences.
Many of our interviewees – who include experts on risk assessment from the public and private sectors – have corroborated these challenges in identifying and assessing risk. They have noted, for example, that the likelihood/consequences model is too linear for the social world, that risk assessments are indeed often just assessing likelihood, that some people treat risk assessment as a tick-box exercise, that many in industry prefer intuitive judgement over rigorous assessment, and that assessing risk can be largely a guessing game. As one of our police interviewees remarked, ‘risk’ is something which “float[s] off the tongue as simply as other things, as simply as ‘good morning’… It’s just what everybody says… But if you actually nail somebody down and say ‘What does that mean to you?’ people say ‘Ohhh I’m not quite sure.’ Because I think it’s easy to say and difficult to understand.”
In fact, the discourse of risk might be part of the problem, as Jack Dowie has argued. Dowie outlined an imaginary conversation between Humpty Dumpty and a decision analyst. Humpty says: “If I sit on the wall there is a risk of my falling, breaking and becoming, to put it bluntly, an ex-egg. I risk death – or at least disability. That is why I want your help in assessing whether the risk of my sitting on the wall is too high to be acceptable.” To this seemingly reasonable request, the decision analyst replies:
“The multiple and confusing ways you have just used ‘risk’ should confirm that we are well rid of it in making this decision… [Y]ou first used risk as a synonym of probability or chance in relation to an event or outcome (‘my risk of falling or dying’). Secondly, you used risk as a synonym for the harm or loss you might suffer – the death or the disability itself, as distinct from the chance or probability of those two undesired outcomes. And, finally, you talked about the risk of wall sitting – an action, not an event or an outcome. You imply by your earlier question (is the risk of engaging in wall-sitting ‘too high?’) that this ‘risk’ must be some sort of compound summation of all the consequences of the action and their chances of happening – an integration of the chance assessments and outcome evaluations… [T]here are perfectly good, precise and accurate terms for all these concepts – and ‘risk’ is not appropriate for any of them. In fact, any time ‘risk’ is used in the context of serious decision-making it can and should be replaced by a term or terms capable of carrying the requisite analytical burden.”
The challenge in bridging the gap between the conceptual understanding of risk and the everyday discursive articulation is well recognised. Stan Kaplan, for instance, stressed the difficulty of conveying risk in numerical or mathematical terms, while Steve Frosdick emphasised that qualitative risk processes, in fact, have their genesis in identifying technical failures in hazardous technological industries. Perhaps this has not translated too well to the social world, where quantification is problematic, subjectivity is high, and it’s hard to know what someone really means when they talk about ‘risk’.
As the multidisciplinary team working on Project Solebay seeks to address these conceptual and practical challenges throughout our project, we have found it useful to be guided by the fundamental purpose of conducting a risk assessment. As Rausand tells us, “[t]he objective of almost any risk assessment is to support some form of decision-making where risk is an important decision criterion.” This places importance not merely on arriving at a particular risk number (‘6’) or risk category (‘high’), but on ensuring that the processes underpinning that result are logical, robust and justifiable – and that the resulting risk level itself is meaningful and understandable.
Crucially and inescapably, this requires providing a clear rationale and evidence base for determining likelihood and consequences. Because if “[a]ttempts to devise a standard classification of risk levels… fail… [to] address the separate conceptual elements [of likelihood and consequences],” as Dowie remarks, “then they are no longer concerned with ‘risk’.”
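One way this requirement could be operationalised is to make a risk judgement unrecordable without an explicit rationale for each dimension. The sketch below is a hypothetical illustration of that design principle – it is not Project Solebay’s actual methodology, and all names and scales are invented for the example:

```python
# Hypothetical sketch: a risk judgement that cannot be created without
# a stated rationale for BOTH likelihood and consequences.
from dataclasses import dataclass

@dataclass(frozen=True)
class RiskJudgement:
    hazard: str
    likelihood: int            # e.g. 1 (rare) .. 5 (almost certain)
    consequences: int          # e.g. 1 (negligible) .. 5 (severe)
    likelihood_rationale: str
    consequences_rationale: str

    def __post_init__(self):
        # Refuse a score unsupported by reasoning on either dimension.
        if not (self.likelihood_rationale and self.consequences_rationale):
            raise ValueError("each dimension needs a stated rationale")

    @property
    def score(self) -> int:
        # The resulting number is only as meaningful as the rationales above.
        return self.likelihood * self.consequences
```

The design choice here is that the evidence base travels with the number: a downstream decision-maker who receives a ‘6’ or a ‘high’ can always inspect the reasoning that produced it.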
For more information on our work in this area of research, please contact our team at firstname.lastname@example.org