
The hardest of targets

At the official opening of the National Cyber Security Centre on February 14, director Ciaran Martin said in his opening speech that he hoped prospective attackers would come to think of the UK as the “hardest of targets”. The comment reflects the government’s strategy, which has broadened from national security to supporting a resilient digital society.

Angela Sasse at CPDP2017

At the European Information Security Summit, RISCS director and UCL professor Angela Sasse welcomed the opening, saying that “There should be a single authoritative source for advice.” The deputy director, Royal Holloway professor Lizzie Coles-Kemp, spoke about the importance of finding a common language among disparate disciplines to create awareness across an organisation.

A crucial point, said Sasse, is to “stop asking people to do impossible things”. Instead of continuing to blame users, security needs to emulate other areas of technology: support business processes, and recognise that good design and appropriate tools are essential to helping people do the right thing. Sasse’s interest in usability and security goes back to 1999, when she and Anne Adams wrote the paper Users Are Not the Enemy. In 2008, Sasse, with Adam Beautement and Mike Wonham, followed up with the concept of the compliance budget, which framed user time and cognitive capacity as a finite organisational resource like any other.

NCSC’s recently revised password guidance is an example both of the kind of collaboration Martin talked about in his speech and of Sasse’s approach. Much of the advice derives from work done at RISCS to turn usability principles into actionable guidance based on scientific evidence. In an August 2014 paper, Cormac Herley and Dinei Florencio (Microsoft Research) and Paul C. van Oorschot (Carleton University) studied the impact on users of the standard requirement to use a unique random string for every password. Their mathematical analysis showed that attempting to follow this advice does not scale to the number of passwords many people have to cope with today: managing 100 such passwords is equivalent to memorising 1,361 places of pi or the ordering of 17 packs of cards, a cognitive impossibility for all but a very rare few.
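
As a rough illustration of that arithmetic, the sketch below compares the information content of a portfolio of random passwords with memorising digits of pi or the order of shuffled card decks. The password length and alphabet size are illustrative assumptions, not the paper’s exact model, so the resulting figures differ from those quoted above.

```python
import math

# Back-of-the-envelope entropy comparison in the spirit of the
# Herley/Florencio/van Oorschot analysis. All parameters are
# illustrative assumptions; the paper's own model differs in detail.

PASSWORDS = 100   # size of the password portfolio
LENGTH = 10       # assumed characters per password
ALPHABET = 62     # assumed symbol set (a-z, A-Z, 0-9)

bits_per_password = LENGTH * math.log2(ALPHABET)   # ~59.5 bits each
portfolio_bits = PASSWORDS * bits_per_password     # ~5,954 bits in total

bits_per_pi_digit = math.log2(10)                  # ~3.32 bits per digit
bits_per_deck = math.log2(math.factorial(52))      # ~225.6 bits per deck

print(f"portfolio: {portfolio_bits:.0f} bits")
print(f"equivalent pi digits: {portfolio_bits / bits_per_pi_digit:.0f}")
print(f"equivalent card decks: {portfolio_bits / bits_per_deck:.1f}")
```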

Along with EPSRC, NCSC is a founding funder of this second phase of RISCS. In the first phase, RISCS was created to begin building an evidence base for the science of cyber security. In its second phase, RISCS differs in two ways: first, it is broadening beyond its original, purely organisational perspective to include consumers, citizens, SMEs, charities, and communities; second, it is pursuing active collaboration outside academia via a practitioners panel led by Royal Holloway senior lecturer Geraint Price.

Over the coming years, this blog will publish news and commentary about both our own research and that of others with the goal of providing the community with the best up-to-date advice we can. We look forward to collaborating with the NCSC, with practitioners, and with the community at large.

Developer-Centred Security Call

Following the Developer-Centred Security Workshop in November, the National Cyber Security Centre (NCSC) is inviting proposals from academic researchers for research into Developer-Centred Security. Further information can be found here.

RISCS Sponsors the 2016 International Symposium on Engineering Secure Software and Systems, ESSoS16

The Research Institute in Science of Cyber Security (RISCS) is pleased to announce that it will be sponsoring the 2016 International Symposium on Engineering Secure Software and Systems, ESSoS16.

The goal of this symposium, the eighth in the series, is to bring together researchers and practitioners to advance the state of the art and practice in secure software engineering. As one of the few conference-level events dedicated to this topic, it explicitly aims to bridge the software engineering and security engineering communities and to promote cross-fertilization. The symposium features a two-day technical program. In addition to academic papers, it encourages submission of high-quality, informative industrial-experience papers about successes and failures in security software engineering and the lessons learned. It also accepts short idea papers that crisply describe a promising direction, approach, or insight.

Further details are available at https://distrinet.cs.kuleuven.be/events/essos/2016/ .

White Paper Published Jointly by RISCS, Hewlett Packard Enterprise and CESG

The business white paper “Awareness is only the first step: A framework for progressive engagement of staff in cyber security” is the product of collaboration between RISCS researchers and security awareness experts at Hewlett Packard Enterprise (HPE), with oversight by the UK government’s National Technical Authority for Information Assurance (CESG).

Security communication, education, and training (CET) is meant to align employee behavior with the security goals of the organization, but it is not always designed in a way that can achieve this. The purpose of this paper is to set out a framework for security awareness that employees will actually engage with, empowering them to become the strongest link, rather than a vulnerability, in defending the organization.

The paper outlines the steps required to deliver effective security CET as a natural part of an organization’s engagement with employees at all levels. Depending on the organization’s needs, many vehicles are available, such as security games, quizzes, and brainteasers, possibly with prizes, which encourage employees to test their knowledge and explore in a playful manner. The most important finding is that different approaches are needed for routine security tasks and for tasks that require applying existing security skills to new situations. There are many creative ways to improve security behaviors and culture, but it is essential to engage people in the right way; only then can they convert learning into tangible action and new behavior. Security CET needs to be properly resourced, and regularly reviewed and updated, to achieve lasting behavior change.

The report can be downloaded here.

Inaugural Issue of the Journal of Cybersecurity Published

The inaugural issue of the Journal of Cybersecurity is published online today, December 11th. The Journal was created by RISCS members, in collaboration with colleagues in the UK and abroad, as a high-quality venue for publishing research into the science of cyber security. The Journal welcomes submissions of evidence-based research from all disciplinary backgrounds, and in particular multidisciplinary research.

Fred Schneider Gives Keynote Talk at the UK Cyber Security Research Conference 2015

Professor Fred Schneider of Cornell University gave a keynote address to the UK Cyber Security Research Conference 2015 on 28th October. Schneider outlined joint work with Deirdre Mulligan (UC Berkeley) to create a doctrine of cyber security policy: a framework of legal principles that could be followed in making regulatory and economic decisions and that would create incentives for designing and deploying trustworthy systems.

Schneider began by reviewing how cyber security “really evolved”. Cyber security began with 1960s time-sharing computers, which opened up concerns that one user could access another’s computation. The first idea for a solution was prevention: don’t build in vulnerabilities.

Failings rapidly emerged: there is no clear way to measure security, and the approach is unworkable for large, complex systems and impedes innovation. Even if those problems could be solved, preventing vulnerabilities ignores issues such as social engineering and the dynamic environment: everything would have to be reverified every time anything changed. As a result, the US government stopped investing in program verification, although interest has lately revived because of automated tools, and verification may now be part of the solution.

The second idea was risk management – that is, investing in security to reduce expected losses. The problems here are that there is no reliable way to estimate the probability of an attack, and no way to value losses such as confidentiality, integrity, recovery, and costs to third parties. In an attack that harvests personal information in order to apply fraudulently for a credit card, the losses are clear-cut. But what are the costs of taking out a piece of the power grid whose components take a year to replace because they are custom-made just-in-time? Further, the most attractive targets have incentives not to reveal they have been attacked. Finally, underinvesting in security turns out to be a rational strategy, because individuals do not reap the full benefit themselves and cannot control vulnerabilities in any case; even a good system ages. In life insurance, the past is a good predictor of the future – but not in software.
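
To make the objection concrete, here is a minimal sketch of the expected-loss calculus the doctrine relies on. The input figures are illustrative assumptions; Schneider’s point is precisely that for many attacks no defensible values exist for either input.

```python
# Minimal sketch of the expected-loss (risk-management) calculus.
# All figures below are illustrative assumptions.

def expected_annual_loss(p_incident_per_year: float,
                         loss_per_incident: float) -> float:
    """Expected yearly loss = probability of an incident x cost of an incident."""
    return p_incident_per_year * loss_per_incident

# Tangible case: credit-card fraud, where frequency and cost can be estimated.
print(expected_annual_loss(0.3, 50_000))   # 15000.0

# Intangible case: a year-long power-grid outage, or lost confidentiality.
# Neither the probability nor the loss can be estimated, so the calculus
# produces nothing usable.
print(expected_annual_loss(float("nan"), float("nan")))  # nan
```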

By the 2000s, cyber security had moved on to the doctrine of accountability: the idea that every packet should be attributable, so that if it is evil someone can be arrested. Like CCTV, this enables retrospective attribution and punishment, not real-time security. Attribution is often not possible, however, because of national borders, non-state actors, and the weak binding between machines and individual users. The doctrine is also incomplete, as not all attacks are crimes (some may be acts of war), and it leaves a very narrow set of options for privacy. It is still being debated – Schneider cited ID cards as an example – but has not gained much traction.

As a way of moving forward, Schneider and Mulligan began by considering how cyber security works from an economic perspective. In those terms, cyber security is a public good: it is non-rivalrous and non-excludable. Public health shares those qualities, and even though no one thinks public health will ever be a solved problem, its goals are clear: prompt the production of health, and manage its absence.

Where public health is about people, cyber security is about people plus computers. Its parallel goals, therefore, might be: prompt the production of cyber security; manage the remaining insecurity; and reach political agreement to balance individual rights and public welfare.

Many of the means for following this policy are already with us or in development: formal methods, testing, standards for development and analysis, education, training, and certification. Social pressure has not been sufficient, however, so managing the remaining insecurity involves several strategies. Schneider and Mulligan propose incentives in the form of liability for producers. Patching needs to be reliable, rapidly deployed, and widespread; no standard requires software producers to provide patches, but they could be penalised for not doing so. Above the level of individual machines, the lack of a way to patch critical parts of the infrastructure, such as encryption, leaves the whole infrastructure brittle. Schneider also proposed a requirement for system diversity: even though it is economically cheaper if everyone uses the same system, diversity can be engineered, and would work in some, but not all, settings. Schneider’s example was address space layout randomisation, which places a program’s memory at a different location on each run and which Microsoft Windows has included since Vista.
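
That kind of engineered diversity is easy to observe directly. A minimal sketch, assuming a CPython interpreter on an operating system with ASLR enabled (the buffer and its size are arbitrary choices for illustration):

```python
# Run this script several times: on an OS with address space layout
# randomisation enabled, the printed heap address changes between runs,
# illustrating the "engineered diversity" Schneider describes.
import ctypes

buf = ctypes.create_string_buffer(16)   # allocate a small native buffer
print(hex(ctypes.addressof(buf)))       # e.g. 0x7f3a9c0012d0; varies per run
```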

Taking further analogies from public health, Schneider and Mulligan suggest some possibilities: self-checks and self-monitoring for software and hardware; monitoring network traffic at significant boundaries; and coordination among ISPs to defend against attacks such as DDoS, or to stop machines that are not fully patched from connecting. ISPs currently have no incentive to do this, and techniques for filtering encrypted packets still need to be studied.

A more complex issue raised by filtering is where to put the filters: should a nation have a firewall, and what values should apply? Schneider cited China as a current example of a national firewall, albeit one in the service of censorship rather than security. Such isolation would be technically difficult to manage: the risk of balkanising the internet is that the protocols within individual pools may diverge, so that machines work in some areas but not others.

Schneider concluded by stressing the importance of metaphors. If cyber attacks are crimes, that implies deterrence through accountability, which offers no one incentives and leaves the work to those whose job it is to catch and punish the criminals. If cyber attacks are a disease, you take the approach of public cyber security: raising the level of security in all systems is better than raising it in just a few.

However, public cyber security will not work where cyber attacks are warfare. The cold war of the 1980s maintained balance via the doctrine of “mutually assured destruction”, under which rational players would not attack. In the computer world, however, where a pre-emptive strike can destabilise the victim’s ability to retaliate, we live in a world of “mutually unassured destruction”, in which the incentive not to launch a pre-emptive attack is gone. This, Schneider said, is a different set of problems requiring a different set of incentives. The fundamental problem, though, remains the same: we are deploying untrustworthy systems.