News

Fred Schneider Gives Keynote Talk at the UK Cyber Security Research Conference 2015

Professor Fred Schneider of Cornell University gave a keynote address to the UK Cyber Security Research Conference 2015 on 28th October. Schneider outlined joint work with Deirdre Mulligan (UC Berkeley) to create a doctrine of cyber security policy: a framework of legal principles that could guide regulatory and economic decisions and create incentives for designing and deploying trustworthy systems.

Schneider began by reviewing how cyber security “really evolved”. Cyber security began with 1960s time-sharing computers, which opened up concerns that one user could access another’s computation. The first idea for a solution was prevention: don’t build in vulnerabilities.

Failings rapidly emerged: there is no clear way to measure security, and the idea is unworkable for large, complex systems and impedes innovation. Even if those problems could be solved, preventing vulnerabilities ignores issues such as social engineering and the dynamic environment; everything would have to be reverified every time anything changed. As a result, the US government stopped investing in program verification, although interest has lately revived thanks to automated tools, and verification may now be part of the solution.

The second idea was risk management – that is, investing in security to reduce expected losses. The problems here are that there’s no reliable way to estimate the probability of an attack and no way to value losses such as loss of confidentiality or integrity, recovery costs, and costs to third parties. In an attack that harvests personal information in order to apply fraudulently for a credit card, the losses are clear-cut. But what are the costs of taking out a piece of the power grid when the components take a year to replace because they’re custom-made just-in-time? Further, the most attractive targets have incentives not to reveal they’ve been attacked. Finally, underinvesting in security turns out to be a rational strategy because individuals don’t reap the full benefit themselves and can’t control vulnerabilities in any case; even a good system ages. In life insurance, the past is a good predictor of the future – but not in software.
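
In the standard risk-management formulation (a textbook framing rather than Schneider’s own notation), the quantity to be reduced is the expected loss:

    \mathbb{E}[\mathrm{loss}] = \sum_i p_i \, L_i

where p_i is the probability that attack i occurs and L_i is the loss it would cause. Schneider’s point is that for cyber attacks neither p_i nor L_i can be estimated reliably, so the calculation cannot be carried out in practice.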

By the 2000s, cyber security had moved on to the doctrine of accountability: the idea that every packet should be attributable so that if it’s evil someone can be arrested. Like CCTV, this system enables retrospective attribution and punishment, not real-time security. However, attribution is often not possible because of national borders, non-state actors, and the weak binding between machines and individual users. It’s also incomplete, as not all attacks are crimes (some may be acts of war), and the set of options for privacy is very narrow. This doctrine is still being debated – Schneider cited ID cards as an example – but has not gained much traction.

As a way of moving forward, Schneider and Mulligan began by considering how cyber security works from an economic perspective. In those terms, cyber security is a public good; that is, it is non-rivalrous and non-excludable. Public health shares those qualities, and even though no one thinks public health will ever be a solved problem, its goals are clear: prompt the production of health and manage its absence.

Where public health is about people, cyber security is about people plus computers. Its parallel goals, therefore, might be: prompt the production of cyber security; manage the remaining insecurity; and reach political agreement to balance individual rights and public welfare.

Many of the means for following this policy already exist or are in development: formal methods, testing, standards for development and analysis, education, training, and certification. Managing the remaining insecurity involves several strategies. As social pressure has not been sufficient, Schneider and Mulligan propose incentives in the form of liability for producers. Patching needs to be reliable, rapidly deployed, and widespread; no standard requires software producers to provide patches, but they could be penalized for not doing so. Above the level of individual machines, the lack of a way to patch critical parts of the infrastructure, such as encryption, leaves the entire infrastructure brittle. Schneider also proposed a requirement for system diversity: even though it’s economically cheaper if everyone uses the same system, diversity can be engineered and would work in some, but not all, settings. In Schneider’s example, earlier versions of Microsoft Windows randomised the address space, a practice that ceased in Windows 7.

Taking further analogies from public health, Schneider and Mulligan suggest some possibilities: self-checks and self-monitoring for software and hardware; monitoring network traffic at significant boundaries; and coordination among ISPs to defend against attacks such as DDoS or to stop machines that aren’t fully patched from connecting. ISPs currently have no incentive to do this. Techniques for filtering encrypted packets need to be studied.

A more complex issue that filtering raises is where to put the filters: should a nation have a firewall, and what values should apply? Schneider cited China as a current example of a firewall, albeit in the service of censorship rather than security. Such isolation would be technically difficult to manage: the risk of balkanizing the internet is that the protocols within individual pools may diverge, so machines may work in some areas but not others.

Schneider concluded by stressing the importance of metaphors. If cyber attacks are crimes, that implies deterrence through accountability, offering no one incentives and leaving the work to those whose job it is to catch and punish the criminals. If cyber attacks are a disease, you take the approach of public cyber security – raising the level of security in all systems is better than raising it in just a few systems.

However, public cyber security will not work in the area where cyber attacks are warfare. The 1980s cold war maintained balance via the doctrine of “mutually assured destruction”, in which rational players would not attack. In the computer world, however, where it’s possible to destabilise the retaliatory attack, we live in a world of “mutually unassured destruction”, in which the incentive not to launch a pre-emptive attack is gone. This, Schneider said, is a different set of problems that needs a different set of incentives. However, the fundamental problem remains the same: that we are deploying untrustworthy systems.

RISCS Founds Journal of Cyber Security

RISCS has founded the open access Journal of Cyber Security, to be edited by David Pym and Tyler Moore. The Journal will publish accessible articles describing original research in the inherently interdisciplinary world of computer, systems, and information security. The journal is premised on the belief that computer science-based approaches, while necessary, are not sufficient to tackle cybersecurity challenges. Instead, scholarly contributions from a range of disciplines are needed to understand the human aspects of cybersecurity. The Journal will provide a hub around which the interdisciplinary cybersecurity community can form, and is committed to providing quality conceptual and empirical research, as well as scholarship, that is grounded in real-world implications and solutions.

Angela Sasse, RISCS Director, Featured in Wired Magazine Article

The RISCS Director, Professor Angela Sasse, was quoted in a recent issue of Wired magazine. In the magazine’s regular “The Big Question” feature, in an item entitled “How will we fight cybercrime over the next ten years?”, Professor Sasse was quoted as saying:

“In ten years, we will have security that is largely invisible to legitimate users, and that delivers added value. Today, security gets in the way of people’s activities, and requires too much time and attention. People are tired of mental gymnastics and being interrupted by warnings when they go online. Future services will deliver security and privacy as part of a great customer experience.”

Also quoted in the article were Jamie Saunders, Director of the National Cyber Crime Unit at the UK National Crime Agency; Sébastien Marcel, Head of Biometrics at the Idiap Research Institute in Switzerland; Gadi Aviran, Founder and CEO, SenseCy; Colonel Artur Suzik, Director of the NATO Cooperative Cyber Defence Centre of Excellence; and Kevin Mitnick, former FBI Most-Wanted hacker.

Cormac Herley Addresses the UK Cyber Security Research Conference 2014

The talk by Cormac Herley, a principal researcher at Microsoft, focused on scientific self-correction. Given that a key part of RISCS’ mission is to put cyber security on a scientific footing, Herley asked what such a science would look like and what security researchers are doing wrong. The 2010 JASON report, commissioned by the US Department of Defense to evaluate these issues, was one of a number of attempts to answer these questions. Even without getting into the philosophy of science, Herley believes the field can do better.

As a basic principle, science is self-correcting. Therefore, it’s essential to be able to find and identify mistakes and it’s equally important not to let errors accumulate. In other fields, corrections typically begin either with a new observation or because an experiment has exposed a contradiction. Aristotle, for example, posited that a 10kg weight would fall faster than a 1kg weight because it was heavier. Two thousand years later, Galileo began asking what happens if the two weights are tied together: does the 1kg weight slow the 10kg weight’s fall or does the 10kg weight speed up the 1kg weight? It is uncertain whether Galileo ever dropped cannon balls off the top of the Tower of Pisa to test his hypothesis, but the arrival of taller buildings made the experiment easier, and a new military emphasis on gunpowder and ballistics was creating the need for a more exact science of mechanics.

Another contradiction was spotted in the late 19th century by the physicist Hendrik Lorentz, who noted inconsistencies between Isaac Newton’s laws and James Clerk Maxwell’s equations. This problem was eventually solved in 1905, when Albert Einstein published the theory of special relativity. Similarly, James D. Watson and Francis Crick changed their ideas about the structure of DNA after seeing Rosalind Franklin’s X-ray photograph.

In all these cases, a piece of new information showed that the original understanding was wrong, and scientists changed course. The question Herley then posed is: why does this not happen in security? Security certainly makes claims – for example, that people should run antivirus software and use lengthy, complex passwords – in order to avoid stated outcomes. What happens when consumers ignore the advice, which is most often the case? Is that a scientific contradiction? If not, what would be?

In an August 2014 paper, Herley, Dinei Florencio, and Paul C. van Oorschot studied the standard advice to use a unique random string for every password. In their analysis, attempting to follow this advice while managing 100 such passwords is equivalent to memorising 1,361 places of pi or the ordering of 17 packs of cards – a cognitive impossibility. “No one does this.”
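
The comparison rests on simple information-theoretic arithmetic. The Python sketch below shows the style of calculation only; the password length and alphabet size are assumptions chosen for illustration, not the parameters used in the paper, so the resulting figures differ from the paper’s.

    import math

    NUM_PASSWORDS = 100      # passwords the user must manage
    PASSWORD_LENGTH = 8      # assumed length of each random password
    ALPHABET_SIZE = 72       # assumed symbols: letters, digits, punctuation

    # Total information content of the password portfolio, in bits
    portfolio_bits = NUM_PASSWORDS * PASSWORD_LENGTH * math.log2(ALPHABET_SIZE)

    # Each decimal digit of pi carries log2(10) bits of information;
    # each shuffled 52-card deck carries log2(52!) bits
    digits_of_pi = portfolio_bits / math.log2(10)
    card_decks = portfolio_bits / math.log2(math.factorial(52))

    print(f"{portfolio_bits:.0f} bits, roughly {digits_of_pi:.0f} digits of pi "
          f"or {card_decks:.1f} shuffled decks of cards")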

Herley’s resulting question was, “How does it come about that we end up insisting on the necessity of something that is impossible?” Rather than patch this particular instance – technically with password safes, for example – Herley suggested looking at it as evidence of a serious error and trying to locate the problem in the reasoning that leads to it. “How did we end up insisting on the necessity of things that are universally ignored and provably impossible? How do we end up making claims that no observation can contradict?”

For another example, Herley cited the opening line of an article by Fred Schneider that appeared in a 2012 cyber security special issue of the NSA journal The Next Wave: “A secure system must defend against all possible attacks—including those unknown to the defender.” Herley struggled with this first sentence: “Is this a definition or a claim?” If it’s a definition, it says nothing about the world; if it’s a claim then there should be a way of finding these systems and testing that these are indeed a subset of systems that defend against attacks – but there is no way to do this.

Herley went on to analyse these types of errors. “Denying the antecedent”, a durable part of Aristotle’s work, confuses “necessary” and “sufficient”. In formal logic, the fact that x implies y does not mean that not-x implies not-y. An unplugged computer blocked up in concrete and buried may be secure (per Gene Spafford), but that doesn’t mean that only such a computer has that property, just as defending against a particular attack doesn’t mean that you are safe, and not defending against that attack doesn’t mean you will succumb to it. So: reusing passwords is a real threat vector that does enable attackers to do bad things, and not reusing passwords will eliminate that particular risk, but it still doesn’t follow that you shouldn’t reuse passwords.
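
In propositional form (standard logic rather than Herley’s notation), the valid and the fallacious inference patterns are:

    \text{valid (modus ponens):}\qquad (x \Rightarrow y) \wedge x \;\vdash\; y
    \text{invalid (denying the antecedent):}\qquad (x \Rightarrow y) \wedge \neg x \;\nvdash\; \neg y

Here x might be Spafford’s unplugged, concrete-encased computer and y “the computer is secure”: x is sufficient for y, but concluding that any machine not treated this way must be insecure is exactly the invalid step.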

Charles P. Pfleeger and Shari Lawrence Pfleeger’s principle of easiest penetration and Bruce Schneier’s comment that computer attacks are so frictionless that anything that can happen must be assumed to happen contain a different error: the enormous mathematical difference between zero and non-zero, even if the difference is only a tiny amount.

All these errors together, Herley said, committed without malice or laziness, “have led us off the cliff”. In addition, there are grand, widely reported, untestable claims that can never be proved wrong, such as “No security through obscurity” or “There are two kinds of people: those who have been hacked and those who don’t know they’ve been hacked”.

Herley said, “It cuts you off from self-correction, but you feel you’re on solid ground.”

Ultimately, Herley blames these problems on a failure to document assumptions and exercise care about the claims that are being made. No claim should be exempt from the burden of proof, and security, like other fields, needs to avoid jumping to conclusions and falling prey to confirmation bias. Claims about what people must do to defend their systems should be measured against the lived experience of different populations.

Herley suggested adhering to the following principles:

  • Look for contradictions.
  • Stop treating slogans (such as “no security through obscurity” or “usability and security are a tradeoff”) as if they were Newton’s laws. If something can’t be proved from first principles, do not treat it with more reverence than it deserves.
  • Allow no exemptions from the burden of proof.
  • Stop invoking security exceptionalism to excuse sloppy thinking, confirmation bias, vague claims, and jumping to conclusions.

UK Cyber Security Research Conference 2014 Held by RISCS and RIAPAV

The second annual UK Cyber Security Research Conference reflected the divergent approaches to establishing a science of cyber security taken by the two institutes whose work it showcased. Major themes included emerging methods for creating secure systems and making predictions about them, and preventing errors, both errors of scientific thinking about security and errors of implementation. As wireless connections and programmable electronics spread to all types of objects and systems that were not historically designed with security in mind and that cannot easily be patched and updated later, understanding how to build both software and hardware securely from the beginning will become increasingly important.

The four linked projects that make up RISCS, directed from University College London by Angela Sasse, draw on disparate disciplines such as psychology, human-computer interaction, mathematical modelling, and game theory to create science-based knowledge to help organisations answer the two key questions: How secure is my organisation? and How do we make better security decisions? The second institute, the Research Institute in Automated Program Analysis and Verification (RIAPAV), directed from Imperial College by Philippa Gardner, is a set of six projects that draw on the more technical fields of mathematical logic, programming languages, and program analysis and verification with the goal of creating industrial-scale methods for formally proving the correctness, safety, and security of software.

Both institutes are collaborations among multiple disciplines and universities, and are funded by GCHQ in partnership with EPSRC and BIS. A third institute, to be led by Chris Hankin at Imperial College, is being set up to focus on trustworthy industrial control systems.

In his keynote address, Microsoft researcher Cormac Herley discussed the compounding errors in thinking that lead cyber security as a field to fail at the key distinguishing feature of science, self-correction. To reverse this situation, he said, researchers need to look for contradictions, insist on proof from first principles, and avoid confirmation bias, vague claims, and jumping to conclusions. This all seems simple enough, and yet, as Herley said, standard security advice is sometimes impossible to follow and often poorly supported by logic.

The four RISCS projects, now completing their second year of work, are in part efforts to study such pieces of advice without taking their validity for granted: the goal is to build a science base to help organisations make better decisions.

Games and Abstraction uses game theory to model complex scenarios and build proof-of-concept tools to help system administrators make the best decisions about how to defend against attacks. This year the group has developed more sophisticated modelling techniques that include both direct and indirect costs, and a hybrid approach that combines game theory with other optimization techniques such as the knapsack algorithm. The project is now more grounded in the SANS critical controls, and is working on developing defence packages and a recovery game, as well as multi-stage games.
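
As an illustration of the knapsack component only, the Python sketch below selects a set of security controls under a fixed budget. The control names, costs, and benefit scores are invented for the example; the project’s real tools combine such a selection step with game-theoretic modelling of the attacker and with indirect as well as direct costs.

    from itertools import combinations

    # Hypothetical controls: name -> (cost in £k, benefit score)
    controls = {
        "patch management": (30, 9),
        "network segmentation": (50, 8),
        "two-factor authentication": (20, 7),
        "log monitoring": (40, 6),
        "staff training": (25, 5),
    }
    BUDGET = 100

    # Brute-force 0/1 knapsack: adequate for a handful of controls
    best_score, best_set = 0, ()
    for r in range(len(controls) + 1):
        for subset in combinations(controls, r):
            cost = sum(controls[c][0] for c in subset)
            score = sum(controls[c][1] for c in subset)
            if cost <= BUDGET and score > best_score:
                best_score, best_set = score, subset

    print(f"budget £{BUDGET}k -> {best_set} (total benefit {best_score})")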

Cyber Security Cartographies (CySeCa) is a project to provide modelling and mapping tools to organisations with complex infrastructures, to ensure that everything is taken into account when making decisions, including the complexity of interactions, which is often not represented. The project uses social network analysis to map the stakeholders in an organisation and has conducted many in-depth interviews with stakeholders who are often overlooked. This year the project has developed a visual narrative toolkit to allow other researchers or security practitioners to carry out the same type of investigations themselves. On the technical side, the project has mapped and is analysing the technical infrastructure and has developed a clustering algorithm to identify and understand behaviours at the data network layer.

Choice Architectures for Information Security is a collaboration between psychologists and computer scientists translating their understanding of users’ ideas about trust, identity, and security into modelling. The group is testing the idea of “nudging” users to make better decisions, studying when it is appropriate to provide security warnings, and has conducted field trials of a prototype that advises users which available wifi networks satisfy security requirements. A key achievement this year has been the development of a formal framework for nudge design.

Productive Security seeks to improve both security and productivity simultaneously, based on the idea that security does not have to be designed as obstacles that impede users’ ability to do their jobs. The project works with four large organisations and has collected a large body of data on security behaviour, measuring again after changed controls have been deployed to show that these low-friction mechanisms both lighten the users’ burden and are secure enough. Modelling based on the collected data aims to predict the outcome of interventions before deployment. This year, the project developed a new conception of non-compliance after finding that non-compliant users nonetheless take many steps to protect information and systems in the best way they know and think is manageable, and are therefore grateful for better choices. The group has also carried out first experiments using data mining techniques to perform sentiment analysis on the collected data, to better understand where and why there are negative connotations around security, as well as embedded security myths and excuses.

The final panel discussed how to close the gap between academic research and the private sector, and between both of those areas and government. Telefónica’s Adrian Gorham, noting that as a result of his company’s participation in the Productive Security project he now understood better where to direct resources, suggested that researchers might usefully occupy some of the space currently taken up by consultants. For that to work, he said, researchers will need to think more commercially and make a business case that can be sold to the CEO, but consultants are expensive and often simply report back what they’re told. “I would prefer good, hungry, smart students.”

Peter Davies, from Thales e-Security, and Alex Ashby, a consultant specialising in SMEs, both highlighted attackers’ changing targets. Davies described cases where attackers had been targeting specific individuals with the intention of corrupting the code that secures major banks and other highly sensitive organisations. In SMEs, which often do not believe they are a target, the lack of personnel and resources, said Ashby, means that a sudden change in their circumstances – such as suddenly publicly acquiring a celebrity as a client or firing an insider shareholder – may abruptly raise their level of vulnerability. Both situations cause great stress to both the individuals involved and the company in general. “You can change a network much more rapidly than personalities,” observed Davies.

The larger question the panel considered, however, was how to effect fundamental change: whose job is it? The formal methods on show are based on ideas first proposed decades ago, but will only be adopted by commercial software vendors if they do not slow time to market. Changing the legal regime so that vendors are liable for software defects would arguably make a difference, but the profound change that’s needed requires collaboration across all sectors, based on science, and done in such a way as not to close down the industry to innovation.

New RISCS Web Site Published

The Research Institute in Science of Cyber Security’s new web site was released on 23rd May 2014.

The new upgraded site contains information about the Research Institute and the four Cyber Security projects being researched by the six constituent Universities, as well as information on publications and events.

Strengthening UK Cyber Security Workshop

On 15 May 2014, Dr Granville Moore presented the Research Institute and its constituent projects to the “Strengthening UK Cyber Security: Working in Partnership to Reduce Risk” workshop event in London.