In their talk at the June 2017 RISCS meeting, Madeline Carr (Cardiff University) and Siraj Shaikh (Coventry University) outlined a new project funded under the human dimensions call. Beginning June 1, 2017, the project studies the “other human dimension” – that is, not end users, but the policy makers who must assess evidence and make decisions based on it. Shaikh is a professor of systems security; Carr is a reader in international relations who focuses on questions of cyber security, technology transfer, and emerging technologies from a global perspective. The project has hired a law postdoc who specialises in legal frameworks around transnational crime and a specialist in discourse analysis.
Assessing evidence poses a unique set of technical, behavioural, and policy challenges. The environment is fast-moving and constantly changing, and a state’s ability to respond effectively is fundamental to its national security. The evidence itself can be contradictory, biased, and even politicised in cases where cyber security firms align with specific governmental and national interests. This complex matrix of interests and agendas may disrupt the clarity policy makers want. Accordingly, Carr and Shaikh shifted their focus to this other human dimension: the UK’s cyber security policy makers and advisors, a small and disparate group of people with varying levels of technical expertise whose responsibility extends beyond their own organisations.
There is, Carr and Shaikh observed, a distinct lack of research to support this community, despite the importance of the task it has been assigned. This gap was noted in the 2016 National Cyber Security Strategy.
They therefore began with the following research question: how effective are the judgements this group makes after a cyber event, when it must use available evidence to evaluate threats, risks, mitigation, and consequences? To answer it, Carr and Shaikh set out three objectives:
- Evaluate what exactly constitutes the evidence presented to and accessed by policy makers, how they privilege and order that evidence, and what the quality of that evidence is;
- Identify the particular challenges of decision-making in this context and evaluate how effectively policy makers make use of evidence for forming advice;
- Develop a framework for assessing the capacity of evidence-based cyber security policy-making that can be used to make recommendations for improvement and that can be applied to other public, private, and international cohorts.
The project will comprise three work packages. Based on discussions with the project’s partners, GCHQ and the Foreign and Commonwealth Office, the first work package will begin with a mapping exercise to understand the landscape of cyber security policy makers and how they share and source evidence. The first work package will also assess evidence through interviews at all levels of government, a survey, and analysis, and will develop a criteria-based framework. The second work package will create, conduct, and report on a policy crisis game. This technique has been used widely for understanding decision-making in a crisis; Carr and Shaikh will adapt it for evaluating evidence. The game’s scenarios will be based on events that have actually happened, but the evidence will be fabricated. The third and final work package will provide analysis and recommendations, including criteria for how policy makers can better engage with evidence.
In terms of impact, the key aim is to support the UK policy community and help them understand their weaknesses and unconscious biases. The researchers believe the results could extend into the privacy sector via the implementation of the Network and Information Systems Directive and the General Data Protection Regulation. The work could also play a capacity-building role for foreign governments that are likewise struggling to engage with evidence and make decisions.
A number of questions arose. One raised the issue of policy that is set by legal judgements, especially those emerging from areas framed in terms of safety or other concerns rather than cyber security. Another asked how the project would evaluate the “goodness” of a decision, given the many examples of good decisions that nonetheless cause bad results. A third asked about the validity of the intelligence that drives much of cyber security. A fourth noted that policy-making is often reactive: currently, for example, there is a lot of focus on ransomware, but not on the underlying issues that need to be addressed. Finally, a questioner asked whether the adversarial nature implicit in “cyber security” predetermines a particular outcome.
In response, Carr and Shaikh said that the project’s rather narrow focus means that legal judgements are largely out of scope unless they are raised during the interviews. The project does not aim to evaluate the decisions themselves so much as whether policy makers can discern the difference between authoritative and poor-quality evidence, what kinds of evidence are useful, and what helps them decide. Intelligence-based threat reports are one type of evidence; however, policy makers need to be critical of all evidence and understand its source and the information it draws on. The project is keen to bring in proactive evidence, and believes that policy games will prove a good tool for developing capacity. Finally, the project specifically looks at the people in the British civil service who are responsible for making decisions in response to some kind of threat, which is a subset of the people engaged with other types of security.