
Cormac Herley Addresses the UK Cyber Security Research Conference 2014

The talk by Cormac Herley, a principal researcher at Microsoft, focused on scientific self-correction. Given that a key part of RISCS’ mission is to put cyber security on a scientific footing, Herley asked what such a science would look like and what security researchers are doing wrong. The 2010 JASON report, commissioned by the US Department of Defense to examine these issues, was one of a number of attempts to answer such questions. Even without getting into the philosophy of science, Herley believes the field can do better.

As a basic principle, science is self-correcting. Therefore, it’s essential to be able to find and identify mistakes and it’s equally important not to let errors accumulate. In other fields, corrections typically begin either with a new observation or because an experiment has exposed a contradiction. Aristotle, for example, posited that a 10kg weight would fall faster than a 1kg weight because it was heavier. Two thousand years later, Galileo began asking what happens if the two weights are tied together: does the 1kg weight slow the 10kg weight’s fall or does the 10kg weight speed up the 1kg weight? It is uncertain whether Galileo ever dropped cannon balls off the top of the Tower of Pisa to test his hypothesis, but the arrival of taller buildings made the experiment easier, and a new military emphasis on gunpowder and ballistics was creating the need for a more exact science of mechanics.

Another contradiction was spotted in the late 19th century by the physicist Hendrik Lorentz, who noted inconsistencies between Isaac Newton’s laws and James Clerk Maxwell’s equations. This problem was eventually solved in 1905, when Albert Einstein published the theory of special relativity. Similarly, James D. Watson and Francis Crick changed their ideas about the structure of DNA after seeing Rosalind Franklin’s X-ray photograph.

In all these cases, a piece of new information showed that the original understanding was wrong, and scientists changed course. The question Herley then posed was: why does this not happen in security? Security certainly makes claims – for example, that people should run antivirus software and use lengthy, complex passwords in order to avoid stated bad outcomes. What happens when consumers ignore the advice, as they most often do? Is that a scientific contradiction? If not, what would be?

In an August 2014 paper, Herley, Dinei Florencio, and Paul C. van Oorschot studied the standard advice to use a unique random string for every password. In their analysis, attempting to follow this advice while managing 100 such passwords is equivalent to memorising 1,361 places of pi or the ordering of 17 packs of cards – a cognitive impossibility. “No one does this.”
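To give a sense of scale, a rough back-of-envelope entropy comparison helps; the password length and alphabet below are illustrative assumptions made here, not the parameters used in the paper, so the totals differ somewhat from its figures:

\[
100 \times \log_2\!\left(62^{10}\right) \approx 5950 \text{ bits}, \qquad
\frac{5950}{\log_2 10} \approx 1790 \text{ digits of } \pi, \qquad
\frac{5950}{\log_2 52!} \approx 26 \text{ shuffled decks}.
\]

Whatever the exact parameters, remembering 100 independent random passwords means holding thousands of bits of structureless material in memory, which is the kind of comparison the paper draws.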

Herley’s resulting question was, “How does it come about that we end up insisting on the necessity of something that is impossible?” Rather than patch this particular instance – technically with password safes, for example – Herley suggested looking at it as evidence of a serious error and trying to locate the problem in the reasoning that leads to it. “How did we end up insisting on the necessity of things that are universally ignored and provably impossible? How do we end up making claims that no observation can contradict?”

For another example, Herley cited the opening line of an article by Fred Schneider that appeared in a 2012 cyber security special issue of the NSA journal The Next Wave: “A secure system must defend against all possible attacks—including those unknown to the defender.” Herley struggled with this first sentence: “Is this a definition or a claim?” If it’s a definition, it says nothing about the world; if it’s a claim, then there should be a way of finding secure systems and testing that they are indeed a subset of the systems that defend against all possible attacks – but there is no way to do this.

Herley went on to analyse these types of errors. “Denying the antecedent”, a fallacy recognised as far back as Aristotle, confuses “necessary” and “sufficient”. In formal logic, the fact that x implies y does not mean that not-x implies not-y. An unplugged computer encased in concrete and buried may be secure (per Gene Spafford), but that doesn’t mean that only such a computer has that property, just as defending against a particular attack doesn’t mean you are secure, and not defending against that attack doesn’t mean you will succumb. So: reusing passwords is a real threat vector that does enable attackers to do bad things, and not reusing passwords will eliminate that particular risk, but it still doesn’t follow that you shouldn’t reuse passwords.
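The logical form is easy to state; the propositional sketch below is an illustration added here rather than Herley’s own notation:

\[
\text{valid (modus ponens):}\quad (x \rightarrow y),\; x \;\vdash\; y
\qquad
\text{invalid (denying the antecedent):}\quad (x \rightarrow y),\; \neg x \;\nvdash\; \neg y
\]

Reading x as “the machine is unplugged, encased in concrete and buried” and y as “the machine is secure”, the first pattern is sound, while the second would conclude that any machine still plugged in must be insecure – which does not follow.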

Charles P. Pfleeger and Shari Lawrence Pfleeger’s principle of easiest penetration and Bruce Schneier’s comment that computer attacks are so frictionless that anything that can happen must be assumed to happen contain a different error: they rest entirely on the difference between zero and non-zero probability, treating even a vanishingly small possibility as though it were a certainty.
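To illustrate why the magnitude of a non-zero probability still matters (the figures here are assumptions made for illustration, not from the talk): if a particular attack succeeds against a given user with probability p per year, the chance of at least one success over n years is

\[
1 - (1 - p)^{n},
\]

which for p = 10^{-6} and n = 10 is roughly 10^{-5}: possible, but nowhere near the certainty that “anything that can happen must be assumed to happen” would suggest.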

All these errors together, Herley said, committed without malice or laziness, “have led us off the cliff”. In addition, the field makes grand, widely repeated, untestable claims that can never be proved wrong, such as “No security through obscurity” or “There are two kinds of people: those who have been hacked and those who don’t know they’ve been hacked”.

Herley said, “It cuts you off from self-correction, but you feel you’re on solid ground.”

Ultimately, Herley blames these problems on a failure to document assumptions and exercise care about the claims that are being made. No claim should be exempt from the burden of proof, and security, like other fields, needs to avoid jumping to conclusions and falling prey to confirmation bias. Claims about what people must do to defend their systems should be measured against the lived experience of different populations.

Herley suggested adhering to the following principles:

  • Look for contradictions.
  • Stop treating slogans (such as “no security through obscurity” or “usability and security are a tradeoff”) as if they were Newton’s laws. If something can’t be proved from first principles, do not treat it with more reverence than it deserves.
  • Allow no exemptions from the burden of proof.
  • Stop invoking security exceptionalism to excuse sloppy thinking, confirmation bias, vague claims, and jumping to conclusions.