The RISCS story so far…

The second phase of the Research Institute for the Science of Cyber Security (RISCS2) was launched in August 2016. To help understand its goals and focus, this posting outlines its background.

The first phase of RISCS (RISCS1) began in October 2012 with £3.8 million in funding over three and a half years from a partnership of GCHQ, the Department for Business, Innovation and Skills (BIS), and the Engineering and Physical Sciences Research Council (EPSRC), with the funding forming part of the Research Councils UK (RCUK) Global Uncertainties Programme. RISCS was tasked with creating an evidence base that would allow both the RISCS researchers and security practitioners to answer two questions:

– How secure is my organisation?
– How can I make better security decisions?

Many security practices are what UCL professor and RISCS director Angela Sasse calls “craft knowledge” – that is, habits handed down from one generation of security practitioners to another without much thought about changing circumstances and technology. “For a lot of things there’s no knowledge about what the costs and benefits are,” Sasse said at the RISCS launch.

In previous research, The Compliance Budget: Managing Security Behaviour in Organisations (PDF), Sasse, PhD student Adam Beautement, and Hewlett-Packard researcher Mike Wonham analysed the impact of security measures on users in economic terms. Security measures, they argued, must be assessed alongside all the other demands on a user’s time and attention. The user’s ability to comply – the “compliance budget” – is limited and needs to be managed like any other finite corporate resource.
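As a rough illustration of the idea – a sketch only, not the paper’s actual model – compliance can be pictured as a finite budget that each security task draws down; tasks that would overrun the budget get skipped or worked around. In the Python sketch below, every task name and cost is invented:

    # Illustrative sketch of the "compliance budget": security tasks draw
    # on a finite pool of user time and effort. All names and numbers are
    # invented; the original paper's model is richer than this.

    def compliance(tasks, budget):
        """List the tasks users are likely to comply with, in order,
        until the cumulative effort cost exhausts the budget."""
        complied, spent = [], 0.0
        for name, cost in tasks:
            if spent + cost <= budget:
                complied.append(name)
                spent += cost
            # beyond the budget, users skip or work around the control
        return complied

    daily_tasks = [
        ("unlock screen with password", 2.0),  # minutes per day, invented
        ("rotate password monthly", 1.0),
        ("encrypt email attachments", 4.0),
        ("fill in security incident forms", 6.0),
    ]

    print(compliance(daily_tasks, budget=8.0))
    # -> the most expensive demand falls outside the budget and is skipped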

Creating an evidence base requires a multi-disciplinary approach. Via four linked projects involving six universities and coordinated from UCL – Productive Security, Cyber Security Cartographies (CySeCa), Choice Architecture (ChaiSE), and Games and Abstraction – RISCS emphasised collaboration incorporating ideas from such diverse fields as data science, mathematical modelling, social sciences, psychology, and economics. Productive Security sought to identify hot spots where security controls hindered user productivity and find ways to make security work with users instead of against them. ChaiSE drew on psychology and explored the possibilities of using “nudges” to influence users to make better security decisions. Games and Abstraction used game theory and mathematical modelling to develop tools to compare the tradeoffs of differing choices of security controls. Finally, CySeCa contrasted the information flows between people with the information flows across the data network to find gaps and resiliences that are invisible using only one or the other.
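Games and Abstraction’s tooling was far more sophisticated than this, but a toy minimax comparison conveys the flavour of using game theory to weigh security controls. Everything in this sketch – control names, attacker actions, and loss figures – is invented for illustration:

    # Toy defender-attacker game: keys pair a defender control with an
    # attacker action; values are the defender's loss. All invented.
    losses = {
        ("strict firewall", "phishing"): 30, ("strict firewall", "malware"): 5,
        ("user training",   "phishing"): 10, ("user training",   "malware"): 25,
    }

    def worst_case(control):
        # Attacker best response: the action that maximises defender loss.
        return max(loss for (c, _), loss in losses.items() if c == control)

    # Minimax: choose the control with the smallest worst-case loss.
    controls = {c for c, _ in losses}
    best = min(controls, key=worst_case)
    print(best, worst_case(best))  # -> user training 25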

At quarterly meetings, researchers shared their progress, and speakers from industry and government outlined the areas where they needed help, discussed practical applications of RISCS research, and described their own related work. The resulting community, which included 30 post-docs, valued the cross-pollination and the open sharing of contacts, access, and feedback. RISCS’ output included 65 academic papers, 108 talks, 33 other dissemination activities, and information flow mapping and modelling tools (from CySeCa).

The methodology CySeCa developed in a case study with a government department showed that apparent violations of security policy were in fact the result of primary processes and valuable information sharing essential to delivering the service. What was needed was to redesign those processes so the sharing could be done securely. This work was successful enough that a similar exercise is being set up with a second government department.

RISCS also produced two well-received publications. Password Guidance, published by CPNI, is being widely adopted. Awareness is only the first step (PDF), a collaboration between RISCS, Hewlett-Packard Enterprise, and CESG, is intended to help organisations communicate effectively about risks. Based on smaller-scale experiments Productive Security and ChaiSE conducted with SMEs, the guide points out the limits of the common approach to awareness, which warns of dangers but omits the multi-step process needed for the more difficult task of changing behaviour. This guide, too, has been widely taken up. Finally, in September 2015 RISCS launched the open access, peer-reviewed Journal of Cybersecurity.

For RISCS2, which will run for five years from August 2016, community coordination is funded by EPSRC, contingent on RISCS raising another £5 million over its lifetime. About half of that will come from GCHQ, the other half from externally funded projects. The first of these is Detecting and Preventing Mass-Marketing Fraud (DAPM), an evidence-based, TIPS-funded project led by Monica Whitty (Warwick). Also counting towards RISCS’ required funding is the TIPS Fellowship awarded to RISCS deputy director, Royal Holloway professor Lizzie Coles-Kemp.

RISCS2 will have three annual community meetings plus an academic conference shared with its siblings, the Research Institute in Automated Program Analysis and Verification (RIAPAV), led by Philippa Gardner (Imperial), and the Research Institute in Trustworthy Industrial Control Systems (RITICS), led by Chris Hankin (Imperial).

Alongside the advisory board, two new panels will help guide RISCS2. The practitioners panel, to be led by Royal Holloway senior lecturer Geraint Price, will draw its members from people dealing with real problems inside organisations. Panel members will commit to attending meetings for at least a year; they will advise on how best to communicate results to practitioners, suggest research problems and questions, and report on what works and what doesn’t.

The knowledge exchange panel, led by Coles-Kemp, will work to make collaboration with members of other disciplines systematic. One of this panel’s first tasks will be to help translate between disciplines that use similar language but assign it different meanings.

RISCS2 will broaden its scope from large organisations to include citizens, consumers, SMEs, charities, and communities. This is in line with other research, such as the July 2016 report from the Royal Society, which stressed that security cannot be viewed in isolation but must be considered as part of a construct that includes trust, trustworthiness, and privacy. Similarly, the government’s strategy is to broaden from national security and information assurance to supporting a resilient digital society as attacks increase in range, frequency, and sophistication. The CyberStreetwise team is also interested in taking new directions and collaborating, and the goal is to build a consortium with an increasing number of government and industrial organisations that speaks with one voice regarding security education.

GCHQ’s funding will cover both long-term projects and short-term “task forces”. The latter won’t necessarily involve hands-on research; a task force may instead deliver an authoritative statement in an area where the evidence conflicts.

Finally, RISCS2 welcomes investment from companies funded under GCHQ’s CyberInvest scheme. Evidence-based research requires data, access, and testbeds, and Sasse believes RISCS’s track record shows it can be trusted. Its researchers have worked with some companies for as long as seven years and been able to publish the results without giving away sensitive information.

Theory plus practice

Geraint Price at the first RISCS practitioners panel in February 2017

At the first quarterly RISCS community meeting for 2017, Royal Holloway senior lecturer Geraint Price explained the purpose of the practitioners panel, which he leads. Collaboration, he said, is essential, so that the research RISCS academics undertake has practical relevance to the problems practitioners encounter every day, and so that practitioners can benefit from new insights as they occur.

Practitioners who want to join the community should email geraint.price@rhul.ac.uk briefly outlining their interest in RISCS’ activities and mentioning whether they want to join the practitioners panel or find out more.

Price began with a picture of a hammer: as the saying doesn’t quite go, when the only tool you have is a hammer you hit everything you see, whether or not it looks like a nail. Many of the security tools in common use are like this – simplistic, and built for the requirements of a different era, chiefly the military and financial sectors of the 1970s. Yet we keep using them even though many of our requirements have changed.

A key issue is the blinkered perspective caused by the division of disciplines into silos, even within science itself. “As a discipline, we’re drawing far too narrow boundaries,” Price said, going on to quote Leonardo da Vinci: “Learn how to see. Realize that everything connects to everything else.”

Price set out three examples of how changing perspectives and requirements can turn something that works into a disaster or make functional an idea previously dismissed: the de Havilland Comet; Ignaz Semmelweis’s insistence that washing hands between patients would eliminate many infections; and Barry Marshall’s claim that stomach ulcers were caused by bacteria rather than stress, spicy food, or too much stomach acid. In the first case, several of the Comets – the world’s first commercial jet airliner – dropped out of the sky. The crashes were traced to a slightly too-acute angle at the corners of the windows combined with newly reached higher speeds and altitudes, which together caused stress fractures that ripped the planes apart. The incidents inspired many advances in materials science.

Semmelweis was right, we now know, but he failed to gain acceptance for his theories in the mid-19th century because the science needed to explain his findings – and so to convince his peers – did not yet exist. It was only some years after his death in an insane asylum that Louis Pasteur confirmed the germ theory behind the effect Semmelweis had accurately observed.

Marshall was also correct but, unable to get approval for the necessary research, was ignored until he finally infected himself with H. pylori to prove his point. His case shows how scientists can hold onto inaccurate beliefs for too long when proof is not forthcoming – a problem exacerbated in cyber security by a vendor industry that funds experts to promote those same beliefs.

Price argued that something of the same situation applies now to the “CIA triad” of confidentiality, integrity, and availability. “We need a better way to look at security,” he said. “We are using 1970s ideas to solve 21st century problems.”

The UCL researcher John Adams identifies three kinds of risk (PDF): those that can be perceived directly (riding a bike); those that can be perceived through science (cholera, which requires a microscope); and those we cannot perceive and cannot agree on (for example, climate change or low-level radiation). Price argues that many of the risks we face in the cyber world fall into this third, virtual category, which makes them hard for researchers and users alike to grapple with.

The results of work done at Royal Holloway, some funded by RISCS and some by the TREsPASS project, suggest that it’s essential to embrace multiple stakeholders rather than impose control from the single viewpoint that is common today. The RISCS Cyber Security Cartographies project used complementary views of the flow of information between people and across the data network to find gaps that would otherwise have escaped notice. TREsPASS has modelled these multiple perspectives in Lego to get a range of people to engage with designing the system, forcing them to articulate the problems they face and the perspectives they hold. The goal is to change the way people perceive risk.
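One way to picture the Cyber Security Cartographies approach – a minimal sketch with invented flows, not the project’s actual tooling – is to treat the two views as edge sets and diff them. Flows people rely on that have no sanctioned network path are candidate workarounds to redesign securely; network paths with no human counterpart are channels worth reviewing:

    # Two views of the same organisation's information flows (invented).
    social_flows = {("alice", "bob"), ("bob", "carol"), ("carol", "dave")}
    network_flows = {("alice", "bob"), ("carol", "dave")}

    # Human flows with no sanctioned network path: likely workarounds
    # (USB sticks, personal email) that need a secure redesign.
    gaps = social_flows - network_flows

    # Network paths with no human exchange behind them: unexpected or
    # unused channels worth reviewing.
    unexpected = network_flows - social_flows

    print("gaps:", gaps)              # -> {('bob', 'carol')}
    print("unexpected:", unexpected)  # -> set()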

Cyber security is an area whose scientific roots are a problem. It is not a hard science studying natural phenomena, even though it uses techniques from scientific disciplines such as mathematics (for cryptography) and computer science (for systems engineering). Ultimately, the “things” security researchers study are all social and societal constructs. This must shape the research paradigm, or work would have stopped at the one-time pad or the Bell-LaPadula model, which offered provably secure access control but was utterly unusable. The only way cyber security can move forward as a science is by listening to others – especially practitioners, who experience the problems at first hand.
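For readers who haven’t met it, Bell-LaPadula boils down to two mandatory rules – “no read up” and “no write down”. The simplified sketch below (clearance levels and examples invented for illustration) shows both checks, and hints at the rigidity that made the model so hard to live with:

    # Simplified Bell-LaPadula checks. The full model also covers
    # discretionary access and state transitions; this is only the core.
    LEVELS = {"public": 0, "confidential": 1, "secret": 2}

    def may_read(subject, obj):
        # Simple security property: no reading above your clearance.
        return LEVELS[subject] >= LEVELS[obj]

    def may_write(subject, obj):
        # *-property: no writing below your level (prevents leaks downward).
        return LEVELS[subject] <= LEVELS[obj]

    print(may_read("confidential", "secret"))  # False: no read up
    print(may_write("secret", "public"))       # False: no write down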

In this collaboration, the researchers hope to gain:

  • case studies;
  • new ways of looking at the world;
  • help engaging with other disciplines, such as law;
  • help showcasing the problems they can solve;
  • joint development of an enlarged toolkit.

In return, the researchers hope practitioners will gain:

  • the ability to help shape the future research agenda so it’s more relevant to their real-world needs;
  • engagement with testing and validating research outputs;
  • new ways of looking at the problems they encounter daily.

Price closed by imagining the state he hopes cyber security will have reached in 2042. By then, he hopes:

  • the field has tapped every discipline which can and should have an impact on information security;
  • methods to facilitate discussion among these disciplines have been developed, taking into account variations in language, style, and methodology;
  • a toolbox of RISCS-style projects has been developed, tested, and fielded;
  • academia and industry have a better track record of collaboration;
  • academia places greater value on research that is interdisciplinary, practical, and exploratory.

In the meantime, RISCS welcomes input from practitioners and other research projects.