Observing the WannaCry fallout: confusing advice and playing the blame game

As researchers who strive to develop effective measures that help individuals and organisations to stay secure, we have observed the public communications that followed the WannaCry ransomware attack of May 2017 with increasing concern. As in previous incidents, many descriptions of the attack are inaccurate – something colleagues have pointed out elsewhere. Our concern here is the advice being disseminated, and the fact that various stakeholders seem to be more concerned with blaming each other than with working together to prevent further attacks affecting organisations and individuals.

Countries initially affected by WannaCry. From Wikimedia Commons (user:Roke).

Let’s start with the advice that is being handed out. Much of it is unhelpful at best, and downright wrong at worst – a repeat of what happened after Heartbleed, when people were advised to change their passwords before the affected organisations had patched their SSL code. Here is a sample of real advice sent out to staff in a major organisation after WannaCry:

“We urge you to be vigilant and not to open emails that are unexpected, unusual or suspicious in any way. If you experience any unusual computer behaviour, especially any warning messages, please contact your IT support immediately and do not use your computer further until advised to do so.”

Useful advice has to be correct and actionable. Users, who have to cope with dozens, maybe hundreds, of unexpected emails every day, most containing links and many accompanied by attachments, cannot take ten minutes to ponder each one before deciding whether to respond. Such instructions also implicitly and unfairly suggest that users’ ordinary behaviour plays a major role in causing major incidents like this one. RISCS advocates enlisting users as part of the frontline defence. Well-targeted, automated blocking of malicious emails lessens the burden on individual users and builds resilience for the organisation in general.

In an example of how to confuse users, The Register reports that City of London Police sent out its “advice” via email in an attachment entitled “ransomware.pdf”. Users are thus simultaneously exhorted to be “vigilant” and avoid unexpected emails, yet required to open an email attachment in order to get that advice. The confusion created by contradictory advice is worse than the direct consequences of the attack: it enables future attacks. Why play Keystone Cyber Cops when the UK’s National Technical Authority for such matters, the National Cyber Security Centre (NCSC), offers authoritative and well-presented advice on its website?

Our other concern is the unedifying squabbling between spokespeople for governments and suppliers, who blame each other for running unsupported software, not paying for support, charging to support unsupported software, and so on, with security experts weighing in on all sides. To a general public already alarmed by media headlines, finger-pointing creates little confidence that either party is competent or motivated to keep secure the technology on which all our lives now depend. When the supposed “good guys” expend their energy fighting each other instead of working together to defeat the attackers, it’s hard to avoid the conclusion that we are most definitely doomed. As Columbia University professor Steve Bellovin writes, the question of who should pay to support old software requires broader collaborative thought; in avoiding that debate we are choosing, as a society, to pay for such security failures.

We would refer those looking for specific advice on dealing with ransomware to the NCSC guidance, which is offered in separate parts for SMEs and home users, and for enterprise administrators.

Much of the NCSC’s advice is made up of things we all know: we should back up our data, patch our systems, and run anti-virus software. Part of RISCS’ remit is to understand why users often don’t follow this advice. Ensuring backups remain uninfected is, unfortunately, trickier than it should be. Ransomware will infect – that is, encrypt – not only the machine it’s installed on but any permanently connected physical or network drive. This problem ought to be solved by cloud storage, but it can be difficult to find out whether cloud backups will be affected by ransomware, and technical support documentation often simply refers individuals to “your IT support”, even though vendors know few individuals have any. Dropbox is unusually helpful, providing advice on how to recover from a ransomware attack and on how far it can help. Users should be encouraged to read such advice in advance and factor it into their backup plans.
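
To make the backup point concrete: because ransomware encrypts anything the infected machine can write to, what protects a backup is versioning plus disconnection, not the copying itself. The sketch below is a minimal illustration of that idea, with hypothetical paths, not any vendor’s procedure; each run writes a fresh timestamped snapshot to a drive that should be attached only while the backup runs:

    # Illustrative sketch only: versioned snapshots to a drive mounted just
    # for the backup. Paths are hypothetical; adapt to your own setup.
    import shutil
    from datetime import datetime
    from pathlib import Path

    SOURCE = Path.home() / "Documents"   # hypothetical data to protect
    BACKUP_ROOT = Path("/mnt/backup")    # attach/mount only while backing up

    def snapshot_backup(source: Path, backup_root: Path) -> Path:
        """Copy `source` into a fresh timestamped directory under `backup_root`."""
        dest = backup_root / datetime.now().strftime("%Y-%m-%d_%H%M%S")
        shutil.copytree(source, dest)    # raises rather than overwrites
        return dest

    if __name__ == "__main__":
        print(f"Snapshot written to {snapshot_backup(SOURCE, BACKUP_ROOT)}")

Older snapshots then remain intact even if a later run copies already-encrypted files, though the drive must genuinely be disconnected between runs for this to hold.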

There are many reasons why people do not update their software. They may, for example, have had bad experiences in the past that lead them to worry that security updates will fail or leave their system damaged, or incorporate unwanted changes in functionality. Software vendors can help here by rigorously testing updates and resisting the temptation to bundle in new features. IT support staff can help by doing their own tests that allow them to reassure their users that they will help resolve any resulting problems in a timely manner.

In some cases, there are no updates to install. The WannaCry ransomware attack highlighted the continuing use of desktop Windows XP, which Microsoft stopped supporting with security updates in 2014. A few organisations still pay for special support contracts, and Microsoft made an exception for WannaCry by releasing a security patch more widely. Organisations that still have XP-based systems should now investigate to understand why equipment using an unsafe, outdated operating system is still in use. Ideally, the software should be replaced with a more modern system; if that’s not possible the machine should be isolated from network connections. No amount of reminding users to patch their systems or telling them to “be vigilant” will be effective in such cases.

This article also appears on the Bentham’s Gaze blog.

Crossing the streams: Lizzie Coles-Kemp

Lizzie Coles-Kemp, deputy director of RISCS

A key goal of RISCS is to approach security from myriad angles. Among RISCS researchers are psychologists and human-computer interaction specialists, as well as representatives of more traditional disciplines such as mathematics and computer science. RISCS deputy director, Royal Holloway professor Lizzie Coles-Kemp, represents multiple disciplines all by herself.

This contention is easily borne out by just a small selection of Coles-Kemp’s work. For RISCS1, she led Cyber Security Cartographies (CySeCa), which compared social information sharing and network data traffic flows within an organisation to find gaps. She also led the visualisation work package in Technology-supported Risk Estimation by Predictive Assessment of Socio-technical Security (TREsPASS), which built an “attack navigator” to help security practitioners determine which attack opportunities are possible, which attacks are the most urgent to understand, and which countermeasures are most effective. For TREsPASS, Coles-Kemp’s team included a design critic and academic, an interactive design team, an artist, and three mathematicians. Together, they developed visualisations that reflected the work produced by the mathematical modelling and risk algorithm teams.

Coles-Kemp’s publications are equally multi-disciplinary. Her 2013 paper Granddaughter beware! An intergenerational case study of managing trust issues in the use of Facebook is a sociological study of privacy discussions between pairs of grandmothers and granddaughters and reveals the roles families and tools play in determining trust practices. The 2014 paper Watching You Watching Me: The Art of Playing the Panopticon, written with Alf Zugenmaier and Makayla Lewis, studied the impact of the monitoring and surveillance functionality built into many public services intended to protect the vulnerable. The researchers found that prioritising securing and monitoring the system makes the services’ users feel more insecure, and hinders the delivery of digital services. They concluded by arguing that such services must be designed to support the social networks their users interact with.

In a 2016 article written with fellow TREsPASS member René Rydhof Hansen, Everyday Security: A Manifesto for New Approaches to Security Modelling, Coles-Kemp argues that because people need both to produce and share information and to protect it in order to feel safe and secure, modelling everyday security is particularly complex; for this reason, a family of models is required to articulate people’s everyday security needs. Finally, in a paper written with Debi Ashenden, professor of cyber security at the University of Portsmouth and the lead for protective security and risk at the Centre for Research and Evidence on Security Threats (CREST), and presented at the 2017 Academic Archers conference, Coles-Kemp and Ashenden dispute the frequently made assertion that social media are absent from the fictional world of the BBC’s long-running radio soap opera, The Archers, and explore what the show’s characters and their world can tell us about what security means to people in their everyday lives.

The path that led to this unusual approach to security began with a humanities degree in Scandinavian studies and linguistics from the University of Hull. After working briefly in theatre administration, an office temp job led Coles-Kemp to Uniplex, a software company that made a Unix equivalent of Microsoft Office. When the Swedish military needed a secure version of the software, Coles-Kemp’s fluent Swedish meant she was drafted in from training to help with porting and translating it.

Getting it to work on a secure platform was a complex job that piqued Coles-Kemp’s interest: “I got heavily involved with understanding how the secure version of the operating system was designed.”

Coles-Kemp believes that the fact that she only spoke about security in Swedish for the first few years has influenced how she thinks about the subject to this day.

“Linguistically, it does frame how you understand the concepts, particularly structure. When you’re talking about access control in Swedish it’s a different logic than when you talk about it in Anglo-Saxon languages,” she says. Partly, this is because the same word, “säkerhet”, can apply to both safety and security. Plus, “In the Scandinavian view of the world there is often a much more socio-technical bent for thinking about security. It’s a tradition that goes back to the 1970s and the early Scandinavian thinking about software design and interaction.” She went on to work for Dynasoft, a Swedish software house producing Unix access control products, which by the mid-1990s meant smart cards and a forerunner of public key infrastructure. Coles-Kemp ran Dynasoft’s UK subsidiary, winning the 1997 Oxfordshire Business Woman of the Year award.

In 1997, after the company was sold to Security Dynamics (later RSA Security), she became the security manager for the British Council and began an MSc at Royal Holloway. The former showed her that no two risk assessments worked the same way. As a result, “I became very interested in how organisational security processes work, what makes a risk assessment or audit process effective, and what ‘effective’ is.” She focused on these issues for her PhD at King’s College London, still very much a practitioner when she finished it in 2008. Her contemporaneous work for Lloyds Register Quality Assurance (LRQA) focused on ISO 27001 security management assessment for a wide range of organisations including one of the private hospital chains.

“Health care is fascinating because the need for clinical governance is completely enmeshed with security governance. You have to think about security from the perspective of the clinical, and information-sharing needs change as the patient’s condition changes.”

Her academic career began in 2005, when she began teaching undergraduates part-time at Royal Holloway; she moved to full-time in 2008. On arrival, she applied to participate in a “sandpit” run by the Engineering and Physical Sciences Research Council (EPSRC), the Economic and Social Research Council (ESRC), and the Technology Strategy Board. Coles-Kemp was part of a successful funding bid that emerged from this five-day immersive environment, in which researchers collaborated on developing research questions, forming new teams, and preparing proposals. Led by Coles-Kemp, Visualisation and Other Methods of Expression (VOME) studied why people share what they do online and what they view as protection. Her remit: cover under-served communities. In partnership with Ashenden and Alison Adams, the Universities of Salford and Cranfield, the consultancy Consult Hyperion, and Sunderland City Council, Coles-Kemp worked directly with hard-to-reach communities such as the long-term unemployed in socio-economically deprived areas. In that environment, traditional research tools like focus groups and surveys were little help; new methods were needed.

“We weren’t understanding what was of interest to those communities about data sharing because we were making all sorts of assumptions about what was important to them, and we had to get that out of the way to really understand data sharing in this context.”

For example, in these communities, few imagined they had much realistic chance of employment – so the risk that what they posted online might damage those prospects was meaningless. Similarly, in families who have been physically close for generations it often made more sense, for both safety and security, to share passwords. Coles-Kemp often heard, “We share a lot of other stuff.” The result was, “We got close enough to the communities to understand that it’s not that clear-cut, and we have to think about the overall safety and security of the individual within the family unit.”

Lizzie Coles-Kemp, drawn by Makayla Lewis

Their solution happened almost by accident. In VOME’s first year, ESRC offered a bursary to take part in a festival of social science. The VOME group partnered with the theatre company Bimbilibausa, led by clown Freya Stang, to present a short play about privacy choices in the workplace based on their research to date. The group took the play to Sunderland and invited the participants they had worked with to use the council’s voting paddles to select the story’s privacy outcome. Because whole families attended, the play led to intergenerational conversations about privacy and a meta-narrative that showed Coles-Kemp’s team the value of creative engagement techniques. The results encouraged Coles-Kemp to continue working with researchers and artists to develop a range of creative methods, including story sheets and Lego, to create three or four provocations or open questions that then let them drill down into individual issues. This work led to the grandmother-granddaughter paper, developed the understanding that led to the panopticon paper, revealed the complexity of everyday security and therefore the need for a family of information security models, and highlighted the importance of community and family interactions, such as those that dominate the narratives of The Archers, in regulating the flow of information.

Creative engagement methods have both utility for the participant communities and methodological value. A further study, funded by the Arts and Humanities Research Council (AHRC), focused on families separated by prison sentences, with the goal of understanding why they didn’t engage with the support services provided to them. In this case, the families proved to be more interested in talking about the journeys involved in prison visiting. “We went with that, figuring that if support services were important that would manifest itself,” Coles-Kemp says. The group worked with a prison in Northeast England to develop questions and create a large wall collage, which is still in use as part of rehabilitation training for offenders preparing to leave prison, as well as a series of story cubes that form part of visitor induction, helping families understand the kinds of issues that will confront them and introducing the support that’s available.

The creative engagement described here – story cubes, collages, drawings, Lego building – remains part of Coles-Kemp’s practice. CySeCa’s researchers, for example, included Makayla Lewis, who used her sketchnoting, HCI, and user experience expertise to create cartoons based on interviews with security practitioners. These were then used to initiate discussions that exposed the information flows among people; the results were compared with the results of network traffic analysis to find policy conflicts and gaps. In September 2016, Coles-Kemp started a five-year, EPSRC-funded fellowship programme to develop these techniques in conjunction with wider political and sociological theories of security, in order to design and evaluate alternative approaches to securing digital services. Her work in this programme focuses on essential public services, including welfare, health, housing, employment, education, and criminal justice. Coles-Kemp will continue to work with academic and practitioner communities in RISCS to both develop and disseminate these theoretical frameworks, practical techniques, and expertise.

The secondary questions security gap

Angela Sasse at CPDP2017

The BBC reports that a common pastime on Facebook, comparing users’ top ten concerts, may present a security risk. The reason lies in the secondary security questions many websites use as fallback measures to identify users who have forgotten their passwords. Among the standard questions websites prompt users to answer are the first gig you attended, your mother’s maiden name, your favourite movie, and the name of your first pet.

Quoted in the story, RISCS director and UCL professor Angela Sasse notes that it’s fairer to blame the sites for security breaches than individuals, arguing that using information that may be publicly available violates good security principles. Similar stories have surfaced in the past relating to other social media trends, such as posting your “porn name” – typically made up of the name of your first pet coupled with the name of the street you grew up on.

Sasse told the BBC, “The risk is not so much publishing these lists, rather that somebody thinks it is a good idea to use questions like that as security credentials.”

An ancillary problem is that many sites ask the same questions, and in case of a data breach those answers can be used to gain access to other accounts the user holds.

At the National Cyber Security Centre blog, Kate R expands on how site owners and developers might manage these security questions so they leave less of a gap in security. First, she says, try to find alternatives. If that’s not possible, avoid questions with easily guessable answers that attackers can exploit. Dynamic questions, which depend on answers generated from data the sites already hold, may be a more secure choice than static questions if the pool of possible answers is large enough. Consider whether users can remember the answers they give, whether they are likely to use the same answers elsewhere, and how much effort the system will require of users.
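
One mitigation that follows from treating answers as credentials, though the sketch below is our own illustration and not taken from the NCSC post, is to store answers the way passwords should be stored: normalised, then salted and hashed, so a breach of one site’s stored answers cannot be replayed verbatim against another. The function names and parameters here are hypothetical; it assumes Python’s standard-library scrypt binding:

    # Illustrative sketch: store security-question answers like passwords.
    import hashlib
    import os
    import secrets

    def normalise(answer: str) -> bytes:
        # Collapse case and whitespace so "Rex " and "rex" verify the same.
        return " ".join(answer.lower().split()).encode("utf-8")

    def hash_answer(answer: str) -> tuple[bytes, bytes]:
        salt = os.urandom(16)
        digest = hashlib.scrypt(normalise(answer), salt=salt, n=2**14, r=8, p=1)
        return salt, digest

    def verify_answer(answer: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.scrypt(normalise(answer), salt=salt, n=2**14, r=8, p=1)
        return secrets.compare_digest(candidate, digest)  # constant-time compare

Hashing, of course, does nothing about the underlying weakness the post identifies: if an answer is guessable or publicly posted, an attacker can simply supply it, which is why finding alternatives comes first.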

Steven J. Murdoch

On the Bentham’s Gaze blog, UCL Royal Society University Research Fellow Steven J. Murdoch expands on the theme that companies should stop passing the buck to consumers. In a discussion of standard security advice that’s unfit for the real world, he provides some useful advice. For example, he says password re-use across sites is a bigger problem than choosing passwords that are simple enough to remember; he recommends remembering unique passwords for the few most important sites, such as banking and email, and using a password manager for the rest. Similarly, although security experts typically tell users not to write down or share their passwords, this is poor advice within the context of a family, where doing so can be important. Murdoch goes on to discuss the difficulties of giving good security advice when individuals have so little control over the quality of the security measures imposed on them by others such as banks, lenders, mobile phone handset manufacturers, and so on.
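
The generator at the heart of a password manager is simple enough to sketch. The lines below are a loose illustration of the principle rather than any particular product’s implementation: each password is drawn independently from the operating system’s cryptographic random source, so no two sites share a guessable pattern.

    # Illustrative sketch of per-site password generation.
    import secrets
    import string

    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length: int = 20) -> str:
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    # One unique password per site; the manager, not the user, remembers them.
    vault = {site: generate_password() for site in ("bank.example", "mail.example")}

The point is not the snippet itself but the division of labour it implies: the machine remembers dozens of high-entropy secrets so the user only has to remember the one that unlocks the manager.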

The RISCS story so far…

The second phase of the Research Institute for the Science of Cyber Security (RISCS2) was launched in August 2016. To help understand its goals and focus, this posting outlines its background.

The first phase of RISCS (RISCS1) began in October 2012 with £3.8 million in funding over three and a half years from a partnership of GCHQ, the Department for Business, Innovation and Skills, and the Engineering and Physical Sciences Research Council (EPSRC), as part of the Research Councils UK (RCUK) Global Uncertainties Programme. RISCS was tasked with creating an evidence base that would allow both the RISCS researchers and security practitioners to answer two questions:

– How secure is my organisation?
– How can I make better security decisions?

Many security practices are what UCL professor and RISCS director Angela Sasse calls “craft knowledge” – that is, habits handed down from one generation of security practitioners to another without much thought about changing circumstances and technology. “For a lot of things there’s no knowledge about what the costs and benefits are,” Sasse said at the RISCS launch.

In previous research, The Compliance Budget: Managing Security Behaviour in Organisations (PDF), Sasse, PhD student Adam Beautement, and Hewlett-Packard researcher Mike Wonham analysed the impact of security measures on users in economic terms. Security measures, they argued, must be assessed alongside all the other demands on a user’s time and attention. The user’s ability to comply – the “compliance budget” – is limited and needs to be managed like any other finite corporate resource.

Creating an evidence base requires a multi-disciplinary approach. Via four linked projects involving six universities and coordinated from UCL – Productive Security, Cyber Security Cartographies (CySeCa), Choice Architecture (ChaiSE), and Games and Abstraction – RISCS emphasised collaboration incorporating ideas from such diverse fields as data science, mathematical modelling, social sciences, psychology, and economics. Productive Security sought to identify hot spots where security controls hindered user productivity and find ways to make security work with users instead of against them. ChaiSE drew on psychology and explored the possibilities of using “nudges” to influence users to make better security decisions. Games and Abstraction used game theory and mathematical modelling to develop tools to compare the tradeoffs of differing choices of security controls. Finally, CySeCa contrasted the information flows between people with the information flows across the data network to find gaps and resiliences that are invisible using only one or the other.

At quarterly meetings, researchers shared their progress, and speakers from industry and government outlined the areas where they needed help, discussed practical applications of RISCS research, and outlined their own related work. The resulting community, which included 30 post-docs, found the cross-pollination and open sharing of contacts, access, and feedback valuable. RISCS’ output included 65 academic papers, 108 talks, 33 other dissemination activities, and information flow mapping and modelling tools (from CySeCa).

The methodology CySeCa developed in a case study with a government department showed that apparent violations of security policy were actually the result of primary processes and of valuable information sharing essential to delivering the service; what was needed was to redesign those processes so the sharing could be done securely. This work was successful enough that a similar exercise is being set up with a second government department.

RISCS also produced two well-received publications. Password Guidance, published by CPNI, is being widely adopted. Awareness is only the first step (PDF), a collaboration between RISCS, Hewlett-Packard Enterprise, and CESG, is intended to help organisations communicate effectively about risks. Based on smaller-scale experiments conducted by Productive Security and ChaiSE with SMEs, this guide points out the limits of the common approach to awareness, which warns of dangers but fails to implement the multi-step process necessary for accomplishing the more difficult task of changing behaviour. This guide has also been widely taken up. Finally, in September 2015 RISCS launched the open access, peer-reviewed Journal of Cybersecurity.

For RISCS2, which will run over five years from August 2016, community coordination is funded by EPSRC, contingent upon RISCS raising another £5 million over its lifetime. About half of that will come from GCHQ, the other half from externally funded projects. The first of these is the evidence-based, TIPS-funded Detecting and Preventing Mass-Marketing Fraud (DAPM) project, led by Monica Whitty (Warwick). Also counting towards RISCS’ required funding is the TIPS Fellowship awarded to RISCS deputy director, Royal Holloway professor Lizzie Coles-Kemp.

RISCS2 will have three annual community meetings plus an academic conference shared with its siblings, the Research Institute in Automated Program Analysis and Verification (RIAPAV), led by Philippa Gardner (Imperial), and the Research Institute in Trustworthy Industrial Control Systems (RITICS), led by Chris Hankin (Imperial).

Alongside the advisory board, two new panels will help guide RISCS2. The practitioners panel, to be led by Royal Holloway senior lecturer Geraint Price, will draw members from people dealing with real problems inside organisations. Panel members will commit to attending meetings for at least a year to advise on how best to communicate results to practitioners and suggest research problems and questions, as well as advise what works and what doesn’t.

The knowledge exchange panel, led by Coles-Kemp, will work to make collaboration with members of other disciplines systematic. One of this panel’s first tasks will be to help translate between disciplines that use similar language but assign to it different meanings.

RISCS2 will broaden its scope from large organisations to include citizens, consumers, SMEs, charities, and communities. This is in line with other research, such as the July 2016 report from the Royal Society, which stressed that security cannot be viewed in isolation but must be considered as part of a construct that includes trust, trustworthiness, and privacy. Similarly, the government’s strategy is to broaden from national security and information assurance to supporting a resilient digital society as attacks increase in range, frequency, and sophistication. The CyberStreetwise team is also interested in taking new directions and collaborating, and the goal is to build a consortium with an increasing number of government and industrial organisations that speaks with one voice regarding security education.

GCHQ’s funding will cover both long-term projects and short-term “task forces”. The latter won’t necessarily do hands-on research; a task force may instead deliver an authoritative statement in an area where the evidence conflicts.

Finally, RISCS2 welcomes investment from companies funded under GCHQ’s CyberInvest scheme. Evidence-based research requires data, access, and testbeds, and Sasse believes RISCS’s track record shows it can be trusted. Its researchers have worked with some companies for as long as seven years and been able to publish the results without giving away sensitive information.

Theory plus practice

Geraint Price at the first RISCS practitioners panel in February 2017

At the first quarterly RISCS community meeting for 2017, Royal Holloway senior lecturer Geraint Price explained the purpose of the practitioners panel, which he leads. Collaboration, he said, is essential, so that the research RISCS academics undertake has practical relevance to the problems practitioners encounter every day, and so that practitioners can benefit from new insights as they occur.

Practitioners who want to join the community should email geraint.price@rhul.ac.uk briefly outlining their interest in RISCS’ activities and mentioning whether they want to join the practitioners panel or find out more.

Price began with a picture of a hammer: as the saying doesn’t quite go, when the tool you have is a hammer you hit everything you see, whether or not it looks like a nail. Many of the security tools in common use are like this – simplistic, and dating to an era with different requirements, chiefly those of the military and financial sectors in the 1970s. Yet we keep using them anyway, even though many of our requirements have changed.

A key issue is the blinkered perspective caused by the division of disciplines into silos, even within science itself. “As a discipline, we’re drawing far too narrow boundaries,” Price said, going on to quote Leonardo da Vinci: “Learn how to see. Realize that everything connects to everything else.”

Price set out three examples of how changing perspectives and requirements can turn something that works into a disaster, or make functional an idea previously dismissed: the de Havilland Comet; Ignaz Semmelweis’s insistence that washing hands between patients would eliminate many infections; and Barry Marshall’s claim that stomach ulcers were caused by bacteria rather than stress, spicy food, or too much stomach acid. In the first case, there were several instances in which the Comet, the world’s first commercial jet airliner, dropped out of the sky. These were traced to the combination of a slightly too-acute angle on the corners of the windows and the newly reached higher speeds and altitudes. The combination caused stress fractures that ripped the planes apart; the incidents inspired many advances in materials science.

Semmelweis was right, we now know, but he failed to gain acceptance for his theories in the mid-19th century because the science needed to explain his findings did not yet exist. It was only some years after his death in an asylum that Louis Pasteur confirmed the germ theory behind the effect Semmelweis had accurately observed; the science at Semmelweis’s disposal was simply not mature enough to give him the tools to convince his peers.

Marshall was also correct but, unable to get approval for the necessary research, was ignored until he finally infected himself with H. pylori in order to prove his point. His case shows how scientists can hold onto inaccurate beliefs for too long when proof is not forthcoming – a problem exacerbated in cyber security by the presence of a vendor industry that funds experts to promote those same beliefs.

Price argued that something of the same situation applies now to the “CIA triad”: confidentiality, integrity, and availability. “We need a better way to look at security,” he said. “We are using 1970s ideas to solve 21st century problems.”

The UCL researcher John Adams identifies three kinds of risk (PDF): those that can be perceived directly (riding a bike); those that can be perceived through science (cholera, which requires a microscope); and those we cannot perceive and cannot agree on (for example, climate change or low-level radiation). Price argues that many of the risks we face in the cyber world fall into the third, virtual category, which makes them hard for researchers and users alike to grapple with.

The results of work done at Royal Holloway, some funded by RISCS and some by the TREsPASS project, suggest that it’s essential to embrace multiple stakeholders rather than impose control from a single viewpoint, as is common today. The RISCS Cyber Security Cartographies project used complementary views of the flow of information between people and across the data network to find gaps that would otherwise have escaped notice. TREsPASS modelled these multiple perspectives in Lego to get a range of people to engage with designing the system, forcing them to articulate the problems they face and their perspectives on them. The goal is to change the way people perceive risk.

Cyber security is an area where its scientific roots are a problem. It is not a hard science studying natural phenomena, even though it uses techniques from scientific disciplines such as mathematics for cryptography and computer science for systems engineering. Ultimately, however, the “things” security researchers study are all social or societal constructs. This must have some effect on the research paradigm, or work would have stopped at the one-time pad or the Bell-LaPadula model, which offered provably secure access control but was utterly unusable. The only way cyber security can move forward as a science is by listening to others – especially practitioners, who experience the problems at first hand.

In this collaboration, the researchers hope to gain:

  • case studies;
  • new ways of looking at the world;
  • help engaging with different disciplines such as law and others;
  • help showcasing the problems they can solve;
  • joint development of an enlarged toolkit.

In return, the researchers hope practitioners will gain:

  • the ability to help shape the future research agenda so it’s more relevant to their real-world needs;
  • engagement with testing and validating research outputs;
  • new ways of looking at the problems they encounter daily.

Price closed by imagining the state he hopes cyber security will have reached in 2042. By then, he hopes:

  • the field has tapped every discipline which can and should have an impact on information security;
  • methods to facilitate discussion among these disciplines have been developed, taking into account variations in language, style, and methodology;
  • a toolbox of RISCS-style projects has been developed, tested, and fielded;
  • academia and industry have a better track record of collaboration;
  • academia has come to place greater value on research that is interdisciplinary, practical, and explorative.

In the meantime, RISCS welcomes input from practitioners and other research projects.