The RISCS story so far…

The second phase of the Research Institute for the Science of Cyber Security (RISCS2) was launched in August 2016. To help understand its goals and focus, this posting outlines its background.

The first phase of RISCS (RISCS1) began in October 2012 with £3.8 million in funding over three and a half years from a partnership of GCHQ, the Department for Business, Innovation and Skills, and the Engineering and Physical Sciences Research Council (EPSRC), under the Research Councils UK (RCUK) Global Uncertainties Programme. RISCS was tasked with creating an evidence base that would allow both RISCS researchers and security practitioners to answer two questions:

– How secure is my organisation?
– How can I make better security decisions?

Many security practices are what UCL professor and RISCS director Angela Sasse calls “craft knowledge” – that is, habits handed down from one generation of security practitioners to another without much thought about changing circumstances and technology. “For a lot of things there’s no knowledge about what the costs and benefits are,” Sasse said at the RISCS launch.

In previous research, The Compliance Budget: Managing Security Behaviour in Organisations (PDF), Sasse, PhD student Adam Beautement, and Hewlett-Packard researcher Mike Wonham analysed the impact of security measures on users in economic terms. Security measures, they argued, must be assessed alongside all the other demands on a user’s time and attention. The user’s capacity to comply – the “compliance budget” – is limited and needs to be managed like any other finite corporate resource.
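
As a rough way to picture that framing, the sketch below treats the compliance budget as a numeric allowance that each security task draws down; the model, the task costs, and the budget figure are invented for illustration and are not taken from the paper.

    # Toy model of the "compliance budget": illustrative only, not from the paper.
    # Each security task costs the user some effort; once the budget is spent,
    # remaining tasks are likely to be skipped or worked around.

    def tasks_completed(task_costs, budget):
        """Count how many tasks fit within the user's compliance budget."""
        spent, done = 0.0, 0
        for cost in task_costs:
            if spent + cost > budget:
                break                  # budget exhausted: the rest go undone
            spent += cost
            done += 1
        return done

    # Hypothetical effort costs (arbitrary units) for one day's security demands.
    day = [2.0, 1.5, 3.0, 0.5, 4.0, 2.5]
    print(tasks_completed(day, budget=8.0))   # -> 4: the last two demands are dropped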

Creating an evidence base requires a multi-disciplinary approach. Via four linked projects involving six universities and coordinated from UCL – Productive Security, Cyber Security Cartographies (CySeCa), Choice Architecture (ChaiSE), and Games and Abstraction – RISCS emphasised collaboration incorporating ideas from fields as diverse as data science, mathematical modelling, social sciences, psychology, and economics. Productive Security sought to identify hot spots where security controls hindered user productivity and to find ways to make security work with users instead of against them. ChaiSE drew on psychology and explored the possibilities of using “nudges” to influence users to make better security decisions. Games and Abstraction used game theory and mathematical modelling to develop tools for comparing the tradeoffs between different choices of security controls. Finally, CySeCa contrasted the information flows between people with the information flows across the data network to find gaps and points of resilience that are invisible when only one or the other is examined.

At quarterly meetings, researchers shared their progress, and speakers from industry and government outlined the areas where they needed help, discussed practical applications of RISCS research, and outlined their own related work. The resulting community, which included 30 post-docs, found the cross-pollination and the open sharing of contacts, access, and feedback valuable. RISCS’ output included 65 academic papers, 108 talks, 33 other dissemination activities, and information flow mapping and modelling tools (from CySeCa).

The methodology CySeCa developed in a case study with a government department showed that apparent violations of security policy were in fact the result of primary processes and valuable information sharing essential to delivering the service. What was needed was to redesign those processes so the sharing could be done securely. This work was successful enough that a similar exercise is being set up with a second government department.

RISCS also produced two well-received publications. Password Guidance, published by CPNI, is being widely adopted. Awareness is only the first step (PDF), a collaboration between RISCS, Hewlett-Packard Enterprise, and CESG, is intended to help organisations communicate effectively about risks. Based on smaller-scale experiments conducted by Productive Security and ChaiSE with SMEs, this guide points out the limits of the common approach to awareness, which warns of dangers but fails to implement the multi-step process necessary for accomplishing the more difficult task of changing behaviour. This guide has also been widely taken up. Finally, in September 2015 RISCS launched the open access, peer-reviewed Journal of Cybersecurity.

For RISCS2, which will run over five years from August 2016, community coordination is funded by EPSRC, contingent upon RISCS raising another £5 million over its lifetime. About half of that will come from GCHQ, the other half from externally funded projects. The first of these is the evidence-based, TIPS-funded Detecting and Preventing Mass-Marketing Fraud (DAPM) project, led by Monica Whitty (Warwick). Also counting towards RISCS’ required funding is the TIPS Fellowship awarded to RISCS deputy director, Royal Holloway professor Lizzie Coles-Kemp.

RISCS2 will have three annual community meetings plus an academic conference shared with its siblings, the Research Institute in Automated Program Analysis and Verification (RIAPAV), led by Philippa Gardner (Imperial), and the Research Institute in Trustworthy Industrial Control Systems (RITICS), led by Chris Hankin (Imperial).

Alongside the advisory board, two new panels will help guide RISCS2. The practitioners panel, to be led by Royal Holloway senior lecturer Geraint Price, will draw its members from people dealing with real security problems inside organisations. Panel members will commit to attending meetings for at least a year, advising on how best to communicate results to practitioners, suggesting research problems and questions, and reporting what works and what doesn’t.

The knowledge exchange panel, led by Coles-Kemp, will work to make collaboration with members of other disciplines systematic. One of this panel’s first tasks will be to help translate between disciplines that use similar language but assign different meanings to it.

RISCS2 will broaden its scope from large organisations to include citizens, consumers, SMEs, charities, and communities. This is in line with other research, such as the July 2016 report from the Royal Society, which stressed that security cannot be viewed in isolation but must be considered as part of a construct that includes trust, trustworthiness, and privacy. Similarly, the government’s strategy is to broaden from national security and information assurance to supporting a resilient digital society as attacks increase in range, frequency, and sophistication. The CyberStreetwise team is also interested in taking new directions and collaborating, and the goal is to build a consortium with an increasing number of government and industrial organisations that speaks with one voice regarding security education.

GCHQ’s funding will cover both long-term and short-term (“task force”) projects. The latter won’t necessarily involve hands-on research; a task force may instead deliver an authoritative statement in an area where the evidence conflicts.

Finally, RISCS2 welcomes investment from companies funded under GCHQ’s CyberInvest scheme. Evidence-based research requires data, access, and testbeds, and Sasse believes RISCS’s track record shows it can be trusted. Its researchers have worked with some companies for as long as seven years and been able to publish the results without giving away sensitive information.

Theory plus practice

Geraint Price at the first RISCS practitioners panel in February 2017

At the first quarterly RISCS community meeting for 2017, Royal Holloway senior lecturer Geraint Price explained the purpose of the practitioners panel, which he leads. Collaboration, he said, is essential, so that the research RISCS academics undertake has practical relevance to the problems practitioners encounter every day, and so that practitioners can benefit from new insights as they occur.

Practitioners who want to join the community should email geraint.price@rhul.ac.uk briefly outlining their interest in RISCS’ activities and mentioning whether they want to join the practitioners panel or find out more.

Price began with a picture of a hammer: as the saying doesn’t quite go, when the tool you have is a hammer you hit everything you see, whether or not it looks like a nail. Many of the security tools in common use are like this – simplistic, and designed for an era with different requirements, chiefly the military and financial sectors of the 1970s. Yet we keep using them, even though many of those requirements have since changed.

A key issue is the blinkered perspective caused by the division of disciplines into silos, even within science itself. “As a discipline, we’re drawing far too narrow boundaries,” Price said, going on to quote Leonardo da Vinci: “Learn how to see. Realize that everything connects to everything else.”

Price set out three examples of how changing perspectives and requirements can turn something that works into a disaster, or make functional an idea previously dismissed: the de Havilland Comet; Ignaz Semmelweis’s insistence that washing hands between patients would eliminate many infections; and Barry Marshall’s claim that stomach ulcers were caused by bacteria rather than stress, spicy food, or too much stomach acid. In the first case, the de Havilland Comet, the world’s first commercial jet airliner, dropped out of the sky on several occasions. The failures were traced to a slightly too-acute angle on the corners of the windows combined with newly reached higher speeds and altitudes, which together caused stress fractures that ripped the plane apart; the incidents inspired many advances in materials science.

Semmelweis, we now know, was right, but he failed to gain acceptance for his theories in the mid-19th century because the science needed to explain his findings did not yet exist, leaving him without the tools to convince his peers. It was only some years after his death in an insane asylum that Louis Pasteur confirmed the germ theory that explained the effect Semmelweis had accurately observed.

Marshall was also correct but, unable to get approval for the necessary research, was ignored until he finally infected himself with H. pylori in order to prove his point. His case shows how scientists can hold onto inaccurate beliefs for too long when proof is not forthcoming – a problem exacerbated in cyber security by the presence of a vendor industry that funds experts to promote those same beliefs.

Price argued that something of the same situation applies now to the “CIA triad”: confidentiality, integrity, and availability. “We need a better way to look at security,” he said. “We are using 1970s ideas to solve 21st century problems.”

The UCL researcher John Adams identifies three kinds of risk (PDF): those that can be perceived directly (riding a bike); those that can be perceived through science (cholera, which requires a microscope); and those we cannot perceive and cannot agree on (for example, climate change or low-level radiation). Price argues that many of the risks we face in the cyber world fall into this third, virtual category, which makes them hard for researchers and users alike to grapple with.

The results of work done at Royal Holloway, some funded by RISCS and some by the TREsPASS project, suggest that it is essential to embrace multiple stakeholders rather than impose control from the single viewpoint that is common today. The RISCS Cyber Security Cartographies project used complementary views of the flow of information between people and across the data network to find gaps that would otherwise have escaped notice. TREsPASS has modelled these multiple perspectives in Lego to get a range of people engaged in designing the system; being forced to explain the problems they face and the perspectives they hold changes the way they perceive risk.
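
As a toy illustration of that gap-finding idea (an assumed minimal representation, not the CySeCa tooling itself), each view can be reduced to a set of “who shares information with whom” edges, and the mismatches between the two views mark the places worth investigating.

    # Toy comparison of two complementary views of information flow.
    # The edge lists below are invented examples, not CySeCa data.

    social_view = {("alice", "bob"), ("bob", "carol"), ("carol", "dan")}   # flows people report
    network_view = {("alice", "bob"), ("carol", "dan"), ("dan", "eve")}    # flows seen on the network

    # Flows visible in only one view are candidate gaps: sharing that bypasses
    # monitored channels, or network traffic nobody acknowledges.
    only_reported = social_view - network_view    # {("bob", "carol")}
    only_observed = network_view - social_view    # {("dan", "eve")}

    print("reported but not observed:", only_reported)
    print("observed but not reported:", only_observed)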

Cyber security is an area where its scientific roots are a problem. It is not a hard science studying natural phenomena, even though it uses techniques from scientific disciplines such as mathematics (for cryptography) and computer science (for systems engineering). Ultimately, the “things” security researchers study are social and societal constructs. This must shape the research paradigm, or work would have stopped at the one-time pad or the Bell-LaPadula model, which offered provably secure access control but was utterly unusable. The only way cyber security can move forward as a science is by listening to others – especially practitioners, who experience the problems at first hand.

In this collaboration, the researchers hope to gain:

  • case studies;
  • new ways of looking at the world;
  • help engaging with other disciplines, such as law;
  • help showcasing the problems they can solve;
  • joint development of an enlarged toolkit.

In return, the researchers hope practitioners will gain:

  • the ability to help shape the future research agenda so it’s more relevant to their real-world needs;
  • engagement with testing and validating research outputs;
  • new ways of looking at the problems they encounter daily.

Price closed by imagining the state he hopes cyber security will have reached in 2042. By then, he hopes:

  • the field has tapped every discipline which can and should have an impact on information security;
  • methods to facilitate discussion among these disciplines have been developed, taking into account variations in language, style, and methodology;
  • a toolbox of RISCS-style projects has been developed, tested, and fielded;
  • academia and industry have a better track record of collaboration;
  • academia places greater value on research that is interdisciplinary, practical, and exploratory.

In the meantime, RISCS welcomes input from practitioners and other research projects.

The hardest of targets

At the official opening of the National Cyber Security Centre on February 14, director Ciaran Martin used his opening speech to express his hope that prospective attackers would come to think of the UK as the “hardest of targets”. The comment reflects the government’s strategy, which has broadened from national security to supporting a resilient digital society.

Angela Sasse at CPDP2017

At the European Information Security Summit, RISCS director and UCL professor Angela Sasse welcomed the opening, saying that “there should be a single authoritative source for advice”. The deputy director, Royal Holloway professor Lizzie Coles-Kemp, spoke about the importance of finding common language among disparate disciplines to create awareness across an organisation.

A crucial point, said Sasse, is to “stop asking people to do impossible things”. Instead of continuing to blame users, security needs to emulate other areas of technology to support business processes and recognise that good design and appropriate tools are essential to helping people do the right thing. Sasse’s interest in usability and security goes back to 1999, when she and Anne Adams wrote the paper Users Are Not the Enemy. In 2006, Sasse, with Mike Wonham and Adam Beautement, followed up with the concept of the compliance budget, which framed user time and cognitive capacity as a finite organisational resource like any other.

NCSC’s recently revised password guidance is an example of both the kind of collaboration Martin talked about in his speech and Sasse’s approach. Much of the advice derives from work done at RISCS to incorporate usability principles into actionable guidance based on scientific evidence. In an August 2014 paper (PDF), Cormac Herley, Dinei Florencio (Microsoft Research), and Paul C. van Oorschot (Carleton University) studied the impact on users of standard requirements to use a unique random string for every password. Their mathematical analysis shows that attempting to follow this advice does not scale to the number of passwords many people have to cope with today: managing 100 such passwords is equivalent to memorising 1,361 places of pi or the ordering of 17 packs of cards – a cognitive impossibility for all but a very rare few.
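
As a rough back-of-the-envelope version of that comparison, the memory load can be expressed in bits of entropy. The password length and alphabet below are illustrative assumptions rather than the paper’s own parameters, so the output only approximates the figures quoted above.

    import math

    # Assumed parameters (illustrative, not the paper's exact model).
    ALPHABET = 62        # letters and digits
    LENGTH = 8           # characters per random password
    NUM_PASSWORDS = 100

    bits_per_password = LENGTH * math.log2(ALPHABET)   # entropy of one random password
    total_bits = NUM_PASSWORDS * bits_per_password     # total memorisation load

    bits_per_pi_digit = math.log2(10)                  # one decimal digit of pi
    bits_per_deck = math.log2(math.factorial(52))      # one shuffled pack of cards

    print(f"total load: {total_bits:.0f} bits")
    print(f"~{total_bits / bits_per_pi_digit:.0f} digits of pi")
    print(f"~{total_bits / bits_per_deck:.1f} packs of cards")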

Along with EPSRC, NCSC is a founding funder of this second phase of RISCS. In the first phase, RISCS was created to begin building an evidence base for the science of cyber security. In its second phase, RISCS differs in two ways: first, it is broadening past its original, purely organisational perspective to include consumers, citizens, SMEs, charities, and communities; second, it is pursuing active collaboration outside academia via a practitioners panel led by Royal Holloway senior lecturer Geraint Price.

Over the coming years, this blog will publish news and commentary about both our own research and that of others with the goal of providing the community with the best up-to-date advice we can. We look forward to collaborating with the NCSC, with practitioners, and with the community at large.

Developer-Centred Security Call

Following the Developer-Centred Security Workshop in November, the National Cyber Security Centre (NCSC) is inviting proposals from academic researchers for research into Developer-Centred Security. Further information can be found here.

RISCS Sponsors the 2016 International Symposium on Engineering Secure Software and Systems, ESSoS16

The Research Institute in Science of Cyber Security (RISCS) is pleased to announce that it will be sponsoring the 2016 International Symposium on Engineering Secure Software and Systems, ESSoS16.

The goal of this symposium, the eighth in the series, is to bring together researchers and practitioners to advance the state of the art and practice in secure software engineering. As one of the few conference-level events dedicated to this topic, it explicitly aims to bridge the software engineering and security engineering communities and to promote cross-fertilization. The symposium will feature a two-day technical programme. In addition to academic papers, the symposium encourages submission of high-quality, informative industrial experience papers about successes and failures in security software engineering and the lessons learned. It also accepts short idea papers that crisply describe a promising direction, approach, or insight.

Further details are available at https://distrinet.cs.kuleuven.be/events/essos/2016/ .

White Paper Published Jointly by RISCS, Hewlett Packard Enterprise and CESG

The business white paper “Awareness is only the first step: A framework for progressive engagement of staff in cyber security” is the product of collaboration between RISCS researchers and security awareness experts at Hewlett Packard Enterprise (HPE), with oversight by the UK government’s National Technical Authority for Information Assurance (CESG).

Security communication, education, and training (CET) is meant to align employee behavior with the security goals of the organization, but it is not always designed in a way that can achieve this. The purpose of this paper is to set out a framework for security awareness that employees will actually engage with, and empower them to become the strongest link—rather than a vulnerability—in defending the organization.

The paper outlines the steps required to deliver effective security CET as a natural part of an organization's engagement with employees at all levels. Depending on the need, many vehicles are available, from security games and quizzes to brainteasers (possibly with prizes) that encourage employees to test their knowledge and explore in a playful manner. A key message is that different approaches are needed for routine security tasks and for tasks that require applying existing security skills to new situations. There are many creative ways to improve security behaviors and culture, but it is essential to engage people in the right way; only then can they convert learning into tangible action and new behavior. Security CET also needs to be properly resourced, and regularly reviewed and updated, to achieve lasting behavior change.

The report can be downloaded here.

Inaugural Issue of the Journal of Cybersecurity Published

The inaugural issue of the Journal of Cybersecurity will be published online today, December 11th. The Journal was created by RISCS members, in collaboration with colleagues in the UK and abroad, as a high-quality venue for publishing research into the science of cyber security. The Journal welcomes submissions of evidence-based research from all disciplinary backgrounds, and in particular multi-disciplinary research.