Professor Fred Schneider of Cornell University gave a keynote address to the UK Cyber Security Research Conference 2015 on 28th October. Schneider outlined joint work with Deirdre Mulligan (UC Berkeley) to create a doctrine of cyber security policy: a framework of legal principles that could guide regulatory and economic decisions and create incentives for designing and deploying trustworthy systems.
Schneider began by reviewing how cyber security “really evolved”. Cyber security began with 1960s time-sharing computers, which opened up concerns that one user could access another’s computation. The first idea for a solution was prevention: don’t build in vulnerabilities.
Failings rapidly emerged: there is no clear way to measure security, and this idea is unworkable for large, complex systems and impedes innovation. Even if those problems could be solved, preventing vulnerabilities ignores issues such as social engineering and the dynamic environment; everything would have to be reverified every time anything changed. As a result, the US government stopped investing in program verification, although interest has lately revived because of automated tools and verification may now be part of the solution.
The second idea was risk management – that is, investing in security to reduce expected losses. The problems here are that there’s no reliable way to estimate the probability of an attack and that there is no way to value losses like confidentiality, integrity, recovery, and costs to third parties. In an attack that harvests personal information in order to apply fraudulently for a credit card, the losses are clear-cut. But what are the costs of taking out a piece of the power grid when the components take a year to replace because they’re custom-made just-in-time? Further, the most attractive targets have incentives not to reveal they’ve been attacked. Finally, underinvesting in security turns out to be a rational strategy because individuals don’t reap the full benefit themselves and can’t control vulnerabilities in any case; even a good system ages. In life insurance, the past is a good predictor of the future – but not in software.
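The expected-loss calculus that risk management relies on can be made concrete. A minimal sketch of the standard annualised-loss computation, with all figures hypothetical, illustrates exactly which inputs Schneider argues are unknowable for software:

```python
def annualised_loss_expectancy(attack_probability_per_year: float,
                               loss_per_incident: float) -> float:
    """Expected yearly loss: probability of an incident times its cost.
    Both inputs are precisely the quantities Schneider says cannot be
    reliably estimated for cyber attacks."""
    return attack_probability_per_year * loss_per_incident


def worth_investing(control_cost: float,
                    ale_without: float,
                    ale_with: float) -> bool:
    """A security control is rational iff it reduces expected loss
    by more than it costs."""
    return (ale_without - ale_with) > control_cost


# Hypothetical numbers: a 10% yearly chance of a $2M breach,
# reduced to 2% by a $100k control.
print(worth_investing(100_000,
                      annualised_loss_expectancy(0.10, 2_000_000),
                      annualised_loss_expectancy(0.02, 2_000_000)))  # → True
```

The arithmetic is trivial; the keynote's point is that neither input is actuarial for software, and losses such as a destroyed custom-made grid component resist valuation altogether.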
By the 2000s, cyber security had moved on to the doctrine of accountability: the idea that every packet should be attributable so that if it’s evil someone can be arrested. Like CCTV, this system enables retrospective attribution and punishment, not real-time security. Attribution, however, is often not possible because of national borders, non-state actors, and the weak binding between machines and individual users. It’s also incomplete, as not all attacks are crimes (some may be acts of war), and the set of options for privacy is very narrow. This doctrine is still being debated – Schneider cited ID cards as an example – but has not gained much traction.
As a way of moving forward, Schneider and Mulligan began by considering how cyber security works from an economic perspective. In those terms, cyber security is a public good: it is non-rivalrous and non-excludable. Public health shares those qualities, and even though no one thinks public health will ever be a solved problem, its goals are clear: prompt the production of health, and manage its absence.
Where public health is about people, cyber security is about people plus computers. Its parallel goals, therefore, might be: prompt the production of cyber security; manage the remaining insecurity; and reach political agreement to balance individual rights and public welfare.
Many of the means for following this policy already exist or are in development: formal methods, testing, standards for development and analysis, education, training, and certification. Since social pressure has not been sufficient, managing the remaining insecurity involves several strategies. Schneider and Mulligan propose incentives in the form of liability for producers. Patching needs to be reliable, rapidly deployed, and widespread; no standard requires software producers to provide patches, but they could be penalized for not doing so. Above the level of individual machines, the entire infrastructure is brittle because there is no way to patch critical parts of it, such as encryption. Schneider also proposed a requirement for system diversity: even though it’s economically cheaper if everyone uses the same system, diversity can be engineered and would work in some, but not all, settings. In Schneider’s example, Microsoft Windows randomises the address space (ASLR, introduced in Windows Vista), so that the same exploit does not behave identically on every machine.
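The value of engineered diversity can be sketched with a toy simulation (the variant counts and machine population are hypothetical, not from the talk): in a monoculture, one working exploit compromises every machine, while spreading machines across independently engineered variants caps the damage at roughly one variant's share.

```python
import random


def compromised_fraction(num_variants: int,
                         machines: int,
                         rng: random.Random) -> float:
    """Assign each machine one of `num_variants` engineered software
    variants, then release an exploit that works against exactly one
    variant. Returns the fraction of machines compromised."""
    assignments = [rng.randrange(num_variants) for _ in range(machines)]
    target = rng.randrange(num_variants)   # the variant the exploit fits
    hit = sum(1 for v in assignments if v == target)
    return hit / machines


rng = random.Random(42)
print(compromised_fraction(1, 10_000, rng))   # monoculture: 1.0
print(compromised_fraction(16, 10_000, rng))  # roughly 1/16 of machines
```

This is also why diversity works only in some settings: the model assumes exploits do not transfer between variants, which engineered techniques like ASLR approximate but do not guarantee.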
Taking further analogies from public health, Schneider and Mulligan suggest some possibilities: self-checks and self-monitoring for software and hardware and monitoring network traffic at significant boundaries, and coordination among ISPs to defend against attacks such as DDoS or stop machines from connecting that aren’t fully patched. ISPs currently have no incentive to do this. Techniques for filtering encrypted packets need to be studied.
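The ISP-side gate for unpatched machines might look, in the simplest terms, like the sketch below. Everything here is hypothetical (the platform names, the patch-level baseline, and the self-reported client record), and a real deployment would need attested reports rather than self-claimed ones:

```python
# Hypothetical minimum patch levels an ISP might require per platform.
MINIMUM_PATCH_LEVEL = {"win": 17, "linux": 42}


def admit(client_report: dict) -> bool:
    """ISP-side admission check: connect a client only if it reports a
    patch level at or above the baseline for its platform. Unknown
    platforms are refused outright."""
    required = MINIMUM_PATCH_LEVEL.get(client_report.get("platform"))
    if required is None:
        return False
    return client_report.get("patch_level", 0) >= required


print(admit({"platform": "win", "patch_level": 20}))  # → True
print(admit({"platform": "win", "patch_level": 3}))   # → False
```

As the talk notes, the hard part is not the check itself but the incentives: ISPs currently bear none of the cost of the unpatched machines they connect.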
A more complex issue that filtering raises is where to put the filters: should a nation have a firewall, and what values should apply? Schneider cited China as a current example of a firewall, albeit in the service of censorship rather than security. Such isolation would be technically difficult to manage: the risk of balkanizing the internet is that protocols within individual pools may diverge, so that machines work in some areas but not others.
Schneider concluded by stressing the importance of metaphors. If cyber attacks are crimes, that implies deterrence through accountability, offering no one incentives and leaving the work to those whose job it is to catch and punish the criminals. If cyber attacks are a disease, you take the approach of public cyber security – raising the level of security in all systems is better than raising it in just a few systems.
However, public cyber security will not work in the area where cyber attacks are warfare. The 1980s cold war maintained balance via the doctrine of “mutually assured destruction”, in which rational players would not attack. In the computer world, however, where it’s possible to destabilise the retaliatory attack, we live in a world of “mutually unassured destruction”, in which the incentive not to launch a pre-emptive attack is gone. This, Schneider said, is a different set of problems that needs a different set of incentives. However, the fundamental problem remains the same: that we are deploying untrustworthy systems.