The second phase of the Research Institute for the Science of Cyber Security (RISCS2) was launched in August 2016. To help understand its goals and focus, this posting outlines its background.
The first phase of RISCS (RISCS1) began in October 2012 with £3.8 million in funding over three and a half years from a partnership of GCHQ, the Department for Business, Innovation and Skills, and the Engineering and Physical Sciences Research Council (EPSRC), under the Research Councils UK (RCUK) Global Uncertainties Programme. RISCS was tasked with creating an evidence base that would allow both the RISCS researchers and security practitioners to answer two questions:
– How secure is my organisation?
– How can I make better security decisions?
Many security practices are what UCL professor and RISCS director Angela Sasse calls “craft knowledge” – that is, habits handed down from one generation of security practitioners to another without much thought about changing circumstances and technology. “For a lot of things there’s no knowledge about what the costs and benefits are,” Sasse said at the RISCS launch.
In previous research, The Compliance Budget: Managing Security Behaviour in Organisations (PDF), Sasse, PhD student Adam Beautement, and Hewlett-Packard researcher Mike Wonham analysed the impact of security measures on users in economic terms. Security measures, they argued, must be assessed alongside all the other demands on a user’s time and attention. The user’s ability to comply – the “compliance budget” – is limited and needs to be managed like any other finite corporate resource.
Creating an evidence base requires a multi-disciplinary approach. Via four linked projects involving six universities and coordinated from UCL – Productive Security, Cyber Security Cartographies (CySeCa), Choice Architecture (ChaiSE), and Games and Abstraction – RISCS emphasised collaboration incorporating ideas from such diverse fields as data science, mathematical modelling, social sciences, psychology, and economics. Productive Security sought to identify hot spots where security controls hindered user productivity and find ways to make security work with users instead of against them. ChaiSE drew on psychology and explored the possibilities of using “nudges” to influence users to make better security decisions. Games and Abstraction used game theory and mathematical modelling to develop tools to compare the tradeoffs of differing choices of security controls. Finally, CySeCa contrasted the information flows between people with the information flows across the data network to find gaps and points of resilience that are invisible when either view is examined alone.
At quarterly meetings, researchers shared their progress, and speakers from industry and government outlined the areas where they needed help, discussed practical applications of RISCS research, and outlined their own related work. The resulting community, which included 30 post-docs, found the cross-pollination and open sharing of contacts, access, and feedback valuable. RISCS’ output included 65 academic papers, 108 talks, 33 other dissemination activities, and information flow mapping and modelling tools (from CySeCa).
The methodology CySeCa developed in a case study with a government department showed that apparent violations of security policy were actually a result of primary processes and valuable information sharing essential to delivering the service. What was needed was to redesign those processes so the sharing could be done securely. This work was successful enough that a similar exercise is being set up with a second government department.
RISCS also produced two well-received publications. Password Guidance, published by CPNI, is being widely adopted. Awareness is only the first step (PDF), a collaboration between RISCS, Hewlett-Packard Enterprise, and CESG, is intended to help organisations communicate effectively about risks. Based on smaller-scale experiments conducted by Productive Security and ChaiSE with SMEs, this guide points out the limits of the common approach to awareness, which warns of dangers but omits the multi-step process needed for the more difficult task of changing behaviour. This guide has also been widely taken up. Finally, in September 2015 RISCS launched the open access, peer-reviewed Journal of Cybersecurity.
For RISCS2, which will run over five years from August 2016, community coordination is funded by EPSRC, contingent upon RISCS raising another £5 million over its lifetime. About half of that will come from GCHQ, the other half from externally funded projects. The first of these is the evidence-based, TIPS-funded Detecting and Preventing Mass-Marketing Fraud (DAPM) project, led by Monica Whitty (Warwick). Also counting towards RISCS’ required funding is the TIPS Fellowship awarded to RISCS deputy director, Royal Holloway professor Lizzie Coles-Kemp.
RISCS2 will have three annual community meetings plus an academic conference shared with its siblings, the Research Institute in Automated Program Analysis and Verification (RIAPAV), led by Philippa Gardner (Imperial), and the Research Institute in Trustworthy Industrial Control Systems (RITICS), led by Chris Hankin (Imperial).
Alongside the advisory board, two new panels will help guide RISCS2. The practitioners panel, to be led by Royal Holloway senior lecturer Geraint Price, will draw its members from practitioners dealing with real security problems inside organisations. Panel members will commit to attending meetings for at least a year, advising on how best to communicate results to practitioners, suggesting research problems and questions, and reporting what works and what doesn’t.
The knowledge exchange panel, led by Coles-Kemp, will work to make collaboration with members of other disciplines systematic. One of this panel’s first tasks will be to help translate between disciplines that use similar language but assign to it different meanings.
RISCS2 will broaden its scope from large organisations to include citizens, consumers, SMEs, charities, and communities. This is in line with other research, such as the July 2016 report from the Royal Society, which stressed that security cannot be viewed in isolation but must be considered as part of a construct that includes trust, trustworthiness, and privacy. Similarly, the government’s strategy is to broaden from national security and information assurance to supporting a resilient digital society as attacks increase in range, frequency, and sophistication. The CyberStreetwise team is also interested in taking new directions and collaborating, and the goal is to build a consortium with an increasing number of government and industrial organisations that speaks with one voice regarding security education.
GCHQ’s funding will cover both long-term and short-term (“task force”) projects. The latter won’t necessarily be hands-on research; a task force may instead deliver an authoritative statement in an area where the evidence conflicts.
Finally, RISCS2 welcomes investment from companies funded under GCHQ’s CyberInvest scheme. Evidence-based research requires data, access, and testbeds, and Sasse believes RISCS’ track record shows it can be trusted. Its researchers have worked with some companies for as long as seven years and been able to publish the results without giving away sensitive information.