On this second day of the May 2018 RISCS meeting, the focus moved to uncertainty. In a fast-changing environment it’s hard to know whether the decisions we make today will hold up in the future. “Prediction is difficult, especially about the future,” the Danish politician Karl Kristian Steincke wrote in 1948; it’s even harder if you fear being laughed at because you got it all wrong.

The first day of the meeting looked at certainty in the form of metrics. However, in the fast-moving and constantly changing field of cyber security, metrics have limitations. In some areas of emerging technology it isn’t possible to have good metrics because there are so many different pathways to pursue; in others, what you’re trying to measure relates to technologies that haven’t yet been taken up. Even experts frequently make wildly wrong predictions: Ethernet inventor Bob Metcalfe predicted the internet would collapse in 1996 (when this didn’t happen, he publicly ate a printed copy of his column); in 1946 movie studio executive Darryl Zanuck predicted, “Television won’t be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night.”

Helen L., the technical director of the NCSC’s socio-technical group, discussed methods her group has used to frame discussions around complexity, drawn from risk management training the group has delivered in a few locations and is now interested in scaling up.

Understanding complex systems is not a new discipline, but we need new ways to approach it. One of the key issues is identifying where the boundary lies between complex and merely complicated. A complicated system may have many interlinked moving parts, but cause and effect within the system are well understood; you can take it apart, put it back together, and understand repeatably how any change will affect the rest of the system.

Helen L. proposed that complex systems have the following fundamental properties:
– They self-organise;
– Their interactions are non-linear;
– They behave as a collective;
– Their causality is complex and networked, and there is no fixed relationship between cause and effect, so doing the same thing might produce a different effect each time;
– Their emergent properties are not always predictable – for example, you can’t predict “wetness” from looking at the interactions of water molecules;
– There may be patterns and trends, but predictability is reduced;
– They display fat-tail, power-law behaviour, in that rare events occur more often than a thin-tailed (e.g. normal) distribution would suggest (see the sketch after this list).
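To make the fat-tail point concrete – this is an illustrative sketch of mine, not something presented at the meeting – the following Python snippet compares how often “extreme” values (here, more than ten times the median) turn up under a thin-tailed normal distribution versus a heavy-tailed power-law (Pareto) one; the distributions, parameters, and threshold are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Thin-tailed samples: absolute values drawn from a normal distribution.
thin = np.abs(rng.normal(loc=1.0, scale=1.0, size=n))
# Fat-tailed samples: a Pareto (power-law) distribution with shape parameter 1.5.
fat = rng.pareto(a=1.5, size=n) + 1.0

def extreme_rate(samples):
    """Fraction of samples that exceed ten times the sample median."""
    return np.mean(samples > 10 * np.median(samples))

print(f"thin-tailed extreme rate: {extreme_rate(thin):.6f}")  # effectively zero
print(f"fat-tailed  extreme rate: {extreme_rate(fat):.6f}")   # small but clearly non-zero
```

Under the heavy-tailed distribution, events ten times the typical size appear at a noticeable rate, while under the thin-tailed one they essentially never do – which is the sense in which rare events are “more frequent” in complex systems.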

As examples, Helen L. suggested birds flocking, or the economic market. Making sense of such systems requires accepting that the system exhibits signs of complexity, and that because it’s changing and adapting constantly it won’t be possible to extract a valid reference point.

One example of a sense-making framework is the Cynefin Framework, originally described by David Snowden and Cynthia Kurtz in their 2003 paper The New Dynamics of Strategy: Sense-making in a Complex and Complicated World (PDF). Snowden’s company, Cognitive Edge, both conducts research into complexity theory and sells a commercial product. The framework is based on the understanding that a system may have different characteristics at different times and at different levels of abstraction, and on using that understanding to make the right kind of decision and ensure it has the right kind of effect. The framework is meant to evolve from the data put into it, based on the situation.

[Slide 13: “Origins”]

One of Helen L.’s favourite examples to describe this is lung surgery: causality is clear and the same input creates the same output. There are effective checklists and it’s process-driven. It is, however, complicated: the surgeon is an expert with a degree and experience, but they haven’t performed this surgery on this person before, and since everyone is slightly different they must rely on their expert knowledge and good practice to carry out the task. Those are the ordered domains: obvious and complicated.

Un-ordered domains – complex and chaotic – begin where something appears to have gone wrong in the surgery. The surgeon can no longer rely on processes and checklists; the patient’s situation is unfamiliar, something has happened, and the surgeon must try various interventions to find what works. This effort to probe, sense, and respond is much more like risk management in a complex domain. In a chaotic system, something is horribly wrong, and leadership is needed to take action to stabilise the situation and bring back order. Disorder, the domain in the middle of the diagram, has no prescribed answer.

The flow in the diagram reflects the fact that any particular system won’t stay in one domain throughout. This is why sense-making matters: identifying where a system sits at any given time makes it possible to understand how to intervene. The squiggle at the bottom represents the “cliff of complacency”: you’re in the “obvious” domain following the checklist, haven’t yet realised it’s no longer best practice, and can easily be thrown into chaos. So an important element is to keep asking whether you’re still in the domain you think you’re in.
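As a rough aide-mémoire rather than anything presented on the day, the sketch below encodes the four Cynefin domains together with the decision pattern the published framework associates with each (sense–categorise–respond, sense–analyse–respond, probe–sense–respond, act–sense–respond); the class and function names are purely illustrative.

```python
from enum import Enum

class CynefinDomain(Enum):
    # Each value pairs a domain with its characteristic decision pattern.
    OBVIOUS = "sense, categorise, respond: apply best practice and checklists"
    COMPLICATED = "sense, analyse, respond: bring in expertise and good practice"
    COMPLEX = "probe, sense, respond: run safe-to-fail experiments and watch what emerges"
    CHAOTIC = "act, sense, respond: stabilise the situation first, then work back towards order"

def suggested_approach(domain: CynefinDomain) -> str:
    """Return the decision pattern for a domain.

    Because systems move between domains over time (the "flow" in the diagram),
    the real work is the repeated sense-making step of asking which domain you
    are actually in right now.
    """
    return domain.value

print(suggested_approach(CynefinDomain.COMPLEX))
```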

The NCSC’s interest in this approach stems from the fact that many organisations are complex or chaotic, and the NCSC therefore needs to move beyond good practice guides.

David Tuckett is a co-investigator for the EPSRC and RCUK-funded Challenging Radical Uncertainty in Science, Society, and the Environment (CRUISSE) network. The network’s purpose is to help practitioners understand how organisations actually make decisions, and to show how academics across a range of disciplines can contribute. Work done in the US has shown that separating academics from users produces academic work that’s interesting but useless for moving forward. CRUISSE’s starting point, therefore, is decision makers; this RISCS meeting interested CRUISSE researchers because cyber security is an interesting issue in terms of decision making and they want to learn what the problems are. They have similarly attended meetings with the Cabinet Office, insurance companies, humanitarian agencies, and others. A recurring problem is that people try to move too quickly to certainty; CRUISSE hopes to offer reflective assistance.

The network has looked at problems such as how to prepare for and regulate autonomous vehicles; how to help farmers respond to weather and climate change; and how to respond to the changing demand for oil and gas. It has pilot projects in the areas of preventing and mitigating flooding; finding a resilient policy for critical infrastructure and cyber security; and responding to a power outage lasting more than 24 hours. Part of what makes cyber security interesting is that security practitioners compete against opponents, and the two groups learn from each other in an infinite system. What makes it uncertain is the lack of equilibrium: neither group knows what the other will do next.

Tuckett went on to argue that while many people, including behavioural economists and “nudge” advocates, talk as though emotion has no place in practical decision making, it’s important not to leave it out; emotion plays an important role in generating the conviction necessary to act. We evaluate the information we receive cognitively – but also with emotion. “Emotion is fundamental to good decision making,” Tuckett said. “It’s not noise in the system.” Human beings are extremely good at constructing reasons to convince ourselves, but real decision making takes place within a social context that includes factors such as concern for reputation.

The LSE’s Leonard Smith, who works on deep uncertainty, took over to explain the network’s approach to some sample problems. Smith used “weather” and “climate” to distinguish between two important classes of challenge: weather-like problems are the day-to-day kind, decisions that are made over and over and whose outcomes can be checked. Climate-like tasks are one-off, although they may have high potential impact; the physicist Edward Teller, for example, did a calculation to determine whether the atomic bomb would ignite the atmosphere and burn it off the planet, and concluded it was very unlikely. Weather-like questions – for example, how many hunting licences to issue this year for reindeer in the interests of managing the population so it’s still thriving in 50 to 100 years – require models, and these can’t be relied upon the way metrics can be. Safety systems are more like climate systems, and in these climate-like problems you may never know whether you made a good investment. Users with different goals inside an organisation are a weather-like problem. In these cases, Smith said, weather-like models are “useless”, and climate ones are “not particularly informative either” – but the key is to distinguish between them. How does the fire control system pay off if there’s no fire? The security team thinks its approach is working – until the organisation is attacked.

Interactive session

The interactive session was led by Ine Steenmans, whose work focuses on integrating different types of intelligence about possible future change into public decision-making processes. Steenmans asked participants to take a seat at any of eight tables, each representing a group with a different lens: consumers, security experts, financiers, mischief makers, policy makers, academics, technologists, and manufacturers. Each table was asked to consider the following in the light of the scenarios and provocations previously presented:
– Their goals and motivations
– Their aspirations for the Internet of Things and the benefits they thought it would bring
– The consequences of a market shift to large vendors with “packages”
– New and emerging types of threats and harms.

Steenmans explained that one of the reasons for this approach is that when you ask people to imagine the concerns of the future, they tend to imagine them as extensions of their present concerns. Will we really still be focused on personal security and data privacy in 2030? Or will other problems and opportunities have presented themselves? The goal was to get the group to think laterally, differently, and comprehensively about what security will look like in 15 years. The topics chosen for the day were all ones whose future is unknown to all of us, and the session aimed to answer two questions:
– What can we do practically to engage with the ideas in the earlier presentations, thinking about principles rather than techniques?
– What insights can we get in terms of the Internet of Things from a security point of view?

In order to break the group out of their ordinary mode of thinking, Steenmans proposed the following scenario: it is 2030, and we are looking at a family that is fully equipped with “smart” devices, which pervade their home. They can opt into how fluid they want their lives to be; they are completely Internet of Things enabled so that these devices manage every part of their lives, from the clothing they put on in the morning to the meal they eat at night. They can work remotely knowing their home environment will take care of their infant child. The parents love this: they can sleep!

The large poster on each table offered participants four provocations to prompt discussion. One possible future is that consumers become exempt from security responsibilities, because as systems become more complex the security community will find it unacceptable for a non-expert cohort to make decisions. Another is the idea of voluntary security standards implemented at the beginning of the design stage for each device. Should these be mandatory, and if so, who is accountable?

A third possible future is one in which the Internet of Things becomes a market dominated by large package-focused vendors. What are the implications for costs, managing data flows, and keeping device software up to date? Consumers might push back against this option if it means a device can be turned off in the name of security. Finally, what kinds of new harms might emerge? Today, our first thought is loss of privacy, but if you start thinking about a home caring for a child, the loss of personal human contact would be significant. And experience is an important teacher: if suppliers assume all liability, consumers become dependent on them and don’t learn anything further.

Each group was asked to summarise its discussions by agreeing on a key security alert; what members imagined their future selves would tell their 2018 selves (“time machine”); and what they would want to know in order to understand their future behaviour (“crystal ball”).

The consumers table’s key security alert was deskilling – that is, becoming so dependent on the technology that they lose the ability to compensate when it fails. They would advise their 2018 selves to spend more time understanding the benefits and value proposition. They wanted to know how we are going to deal with uncertainties about how liable users will be for external attacks.

Financiers were concerned that it’s unclear whether there is a market for security; the market that may exist for the perception of security relies on trust and clarity about liability. They wanted their 2018 selves to know which firms will succeed and which stocks to buy. And they wanted clarity on the regulatory framework: who owns the intellectual property and data?

The mischief makers advised their 2018 selves to always look for the weak spots and players, to make sure everything is open source, and to watch for what’s *not* happening. Their crystal ball was to ensure that they themselves were not using tracking devices, so that their movements would be less transparent. However, they said, “We will adapt however it goes and find a way in.”

Policy makers’ security alert concerned national ownership of citizens’ data and the risk that a large-scale breach could compromise national security. There are disadvantages to the UK economy from both over-regulation and under-regulation. For the time machine, they focused on debating the balance policy regulations must find. Their crystal ball was to ask who has the right to override your control over your data and physical space; for example, someone seizing control of your data could lock you out of your home.

Security experts asked what role they will have in a world ruled by security by design. “Will we have security experts in future, or will systems take care of themselves?” In their time machine discussion, they tried to understand where the Internet of Things might lead in the way that increased numbers of vehicles led to suburbs – perhaps “robot babies” based on our preferences? Their crystal ball was future standards. Today, these are safety, security, and reliability, but ultimately how can we ensure robustness?

The academics’ alert was the different forms of harm that could emerge from the pace of change. Their time machine advice was not to give up on interdisciplinary jobs. Their crystal ball question was to ask what the key leverage points are that make research valuable.

The technologists believed that, for their own protection from liability, security needs to be taken away from consumers, though they asked how to achieve this. For their time machine, they wanted to know what technologies are needed to provide an acceptable service. For their crystal ball, they wanted to understand what is available now to empower humanity.

Finally, the manufacturers group’s key security alert was the increasing danger of compromise at the machine-to-machine system level, where normal communications and data flows are the target rather than consumers; this might, for example, be a threat in automated systems that perceive need and manufacture appropriately. They would tell their 2018 selves to concentrate on getting evidence-based confidence – so that they get the outcomes they expect when they expect them – rather than relying on “security”. As their crystal ball, they advised finding ways to move quickly and to fail fast and cheaply if necessary, so they can take gambles that do make money and develop a market.

On the large posters provided, each table also captured further detail of their discussions. As next steps, the researchers will use the day’s earlier presentations and the collected interactive data to consider four questions:
– What narratives should be challenged?
– Where are there conflicting long-term interests?
– What types of security incidents lend themselves to repeatable responses and best practice?
– Where are we at risk of treating emotions as noise?

A closing discussion highlighted a number of aspects of these discussions. The financiers’ table, for example, focused almost exclusively on market forces; they cared very little about anything else. The policy table was the opposite: that group focused on how to use politics to balance national security with the national economy. Two tables thought of designing “policy experiments” to compare different approaches against each other; one suggestion was conscripting people to work on open source software (instead of into the military). Finally, the academics’ table recognised that there is still significant resistance in academia to multidisciplinary work.


Wendy M. Grossman

Freelance writer specialising in computers, freedom, and privacy. For RISCS, I write blog posts and meeting and talk summaries.