
Angela Sasse: Can we make people value IT security?

Angela Sasse

As a prelude to this year’s Workshop on Security and Human Behaviour, RISCS director Angela Sasse gave the Cambridge Computer Lab’s annual Wheeler lecture, which we summarise here. Ross Anderson live-blogged both the lecture and the workshop, and a recording is available at Bentham’s Gaze.

Sasse began by answering the lecture’s title question by saying: “it’s the wrong question”, adding that what we need is a fundamental shift in how we think about how we do security.

Sasse stumbled serendipitously into information security in the late 1990s, when working with Peter Kirstein, Jon Crowcroft, and Mark Handley on early VOIP and videoconferencing tools. Their telco partner, BT, asked her – saying, “you know something about usability, right?” – to look at a problem: the accountants were demanding an end to the spiralling cost of help desks required to reset passwords. By this time, BT had 100 people in a Scottish call centre doing nothing but resetting passwords for company employees, on top of the normal call centres for customers. Sasse was asked to “do a quick study and find out why these stupid users can’t remember their passwords”, and, with her PhD students, did just that.

This initial study, which resulted in the widely cited paper Users Are Not the Enemy (PDF), found that the company’s security policies required employees to perform impossible memory tasks. Unless the company wants to pay for all its staff to take a year off to train as memory athletes, nobody can remember 16 to 64 uncrackable eight-digit codes and six-digit PINs that change monthly without writing them down. Most people who read and cite the paper get this point; what is often missed is the warning in the closing paragraphs that asking users to do something impossible and then shouting at them when they don’t comply leads to the worst possible outcome: users conclude that security is a joke, and a downward spiral sets in, creating a security culture in which the two camps fight each other. This situation helps only attackers.

Twenty years on, there are some basic points everyone ought to know if they’re designing security for a system:

  • Who will use it?
  • What is their main job?
  • What other security mechanisms do they use?

Every security designer should know that complex systems cause mistakes unless constant use makes their users very skilled, and that the combination of high-workload security and conflicts with primary tasks leads to non-compliance and shadow practices. Sasse has spent much of the last 20 years helping organisations that call her in saying they know they have non-compliance problems that could land them in trouble with their auditors and regulators, as well as making them vulnerable to attackers. In those engagements, Sasse typically finds that, rather than not caring, users generally try to implement the best security that they think is feasible and manages the risks adequately. So, for example, someone unable to share a file with a colleague because of difficulties with access controls might opt to email it password-protected – a choice most security experts will dismiss with contempt as “laughably insecure”. That user is thinking about the confidentiality of the information, whereas the security person is also concerned with preserving version control and audit trails, benefits the user may be unaware of. Better communication is necessary to ensure that users understand the security requirements associated with the tasks they carry out.

Some things have improved. Today we have better alternatives for authentication, and there is little need to have myriad complex passwords. The UK’s National Technical Authority has acknowledged that the old rules required impossible memory feats, and published revised guidelines that match the current threat landscape and reduce the burden on users. However, there are many other security measures that continue to drain enormous amounts of time and are complex to use. In some cases, the benefits of using the technologies are so indiscernible that even the most minimal effort doesn’t seem worth it. Before asking people to comply, we should ask if the measures we’re imposing on them are worth their trouble and get rid of the ones that offer little benefit.
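To make the contrast concrete, here is a minimal, hypothetical sketch (not from the lecture) of a password check in the spirit of that revised guidance: it relies on adequate length and a blocklist of known-weak passwords rather than composition rules and forced expiry. The blocklist contents and minimum length are illustrative assumptions.

    # Hypothetical sketch: a password check that reduces the burden on memory --
    # no composition rules, no forced monthly changes.

    def acceptable(password: str, blocklist: set[str], min_length: int = 8) -> bool:
        """Accept any sufficiently long password that is not a known-weak one."""
        return len(password) >= min_length and password.lower() not in blocklist

    # Illustrative stand-in for a real list of breached or common passwords.
    blocklist = {"password", "123456", "qwerty123"}

    print(acceptable("correct horse battery staple", blocklist))  # True
    print(acceptable("123456", blocklist))                        # False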

One of these bêtes noires is security warnings: a key usability principle holds that these should be saved for exceptional events the designer can’t anticipate. Yet these warnings pop up all the time, and until very recently the associated false positive rates were extremely high. People have very quickly learned to just swat them away – a habit Microsoft’s user interface designers had already embedded. SSL warnings are a particularly bad example: users don’t understand what they mean, don’t know what decision to make, and can only conclude there is nothing they can do except either click OK or give up and go home.

A user’s view of an HTTPS warning (by Matthew Smith)

Human factors experts regard false positive rates of over 3% – certainly over 5% – as a problem, because people stop taking the warnings seriously. A 2013 study of HTTPS administrator mistakes by Devdatta Akhawe and Adrienne Porter Felt (PDF) found a rate of 15,400 false positive certificate warnings for every true one. At that rate, the mechanism is too dysfunctional to deploy. Nonetheless, a train of work presented at CHI 2015 continues to attempt to force users to pay attention to these warnings by making it harder and less attractive to click OK, and by using variations in colours, text, and box size to delay habituation until the user has seen the warning 13 times; functional MRI studies have shown that without such changes habituation sets in after only two viewings.

Still, acceptance is growing that tools with very high false positive rates are a mistake. Google has been working to reduce the false positive rate, and researchers such as Matthew Smith and Sascha Fahl in Germany and Marian Harbach in the US have found that these inaccurate warnings are largely caused by implementation errors, and have reduced such misconfigurations by correcting erroneous example code in the places developers copy from, such as GitHub and Stack Overflow. Gradually, the false positive rate is dropping – a better solution for all concerned.
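As a hedged illustration of the kind of implementation error involved (a sketch, not an example drawn from the research itself), copy-pasted client code often silently disables certificate validation to make errors and warnings “go away”; the URL below is a placeholder.

    import requests

    URL = "https://example.com/api"  # placeholder endpoint

    # Anti-pattern commonly found in copy-pasted snippets: switching off
    # certificate validation so the error disappears -- and, with it, the protection.
    response = requests.get(URL, verify=False)

    # Safer: keep validation on (the default); if an internal CA is in use,
    # point the client at that CA bundle instead of disabling verification.
    response = requests.get(URL, verify=True)
    # response = requests.get(URL, verify="/path/to/internal-ca.pem")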

Another of Sasse’s pet peeves – and, according to a BBC report, something most users hate – is CAPTCHAs, the challenge-response tests used to detect spam bots and stop them from signing up for free email accounts, mounting automated password-guessing attacks, mining and scraping data, and manipulating online data gathering. The use of CAPTCHAs is fundamentally dishonest: instead of acknowledging that the service provider has a security problem, it dumps that problem onto all the service’s users by making them prove they’re human. In the physical world, people do not put up with this. When Ryanair added CAPTCHAs to stop screen scraping (so its fares wouldn’t appear on price comparison sites), the airline’s bookings dropped significantly and online forums such as TripAdvisor filled with complaints. The CAPTCHAs were soon removed. Adding to the nuisance value, distorted CAPTCHAs are extremely hard for many people to read; even the recent improved versions, such as Google’s “I’m not a robot” tickbox or the “fun” animations, still waste people’s time, and no one likes them. It’s notable that many born-digital companies manage without them or use them very sparingly.

Sasse’s argument is that underlying these mechanisms is a form of paternalism which holds that security people are the experts, and that people should trust them and do what they say. This has led to the relatively recent trend of incorporating behavioural economics into security. Richard H. Thaler and Cass R. Sunstein’s 2008 book, Nudge, is based on this idea: “choice architects” decide which good choices people should be steered towards. Many studies have shown that setting defaults to require opt-out works – for example, in raising organ donation rates in the UK or increasing pension sign-ups. Nudges do improve compliance, but applications in computer security have overlooked the fact that the choices have to be genuinely beneficial to the person being nudged, which is often not the case in security. Cue XKCD’s murder car:

XKCD 1837 – “Rental Car”

This year’s CHI saw the beginnings of a resurgence within usability of a movement whose principles could be valuable in security. Participants in a workshop on Batya Friedman’s value-sensitive design felt that usability in general has lost its way and ought to return to its roots: researching people’s genuine problems and needs and designing technology to support them. The resulting Denver Manifesto formulates this strategy:

It is important for these values to be explicitly and intentionally considered, not just with respect to the values intended but whose values are included, how conflicting values are negotiated, and how values are deployed in practice, especially but not solely when a technology is not fully transparent about how it produces its outputs.[1]

Friedman (University of Washington), the pioneer in this area, believes that security and privacy particularly need this approach. Such an adaptation would focus on understanding the security and privacy properties users are looking for, rather than imposing a paternalistic set of values on them. In 2002, with Daniel C. Howe and Ed Felten, she developed a framework for assessing informed consent and used it to redesign the Mozilla browser’s cookie management mechanisms, written up as Informed Consent in the Mozilla Browser (PDF). In 2005, in the paper Informed Consent by Design (PDF), written with Peyina Lin and Jessica K. Miller, she took the redesigned Mozilla cookie manager (“Cookie Watcher”) further, giving people more usable information and just-in-time management tools, and examined how users assess whether a website is secure. The paper set out six principles of meaningful consent:

  • Disclosure: provide accurate information about benefits and harms;
  • Comprehension: the user must understand what is being disclosed;
  • Voluntariness: user can reasonably resist participation;
  • Competence: user has mental, emotional and physical competences to give informed consent;
  • Agreement: clear opportunity to accept or decline;
  • Minimal Distraction: user’s attention should not be diverted from main task.

Sites always claim they get informed consent; however, Sasse argues that these principles, which are also accepted by the OECD, are not only often still not followed but are even trampled upon with impunity. The iTunes agreement, for example, is 52 pages long and requires legal training to understand. Companies effectively make us lie to cover their corporate backs. We know users do not read these documents; some have accepted T&Cs that include giving up one’s immortal soul. There is nothing voluntary about accepting them: agree or don’t use the service – a situation companies exploit to claim that users do not care about privacy. Studies find the opposite and also that even supposedly privacy-apathetic US users feel they’re being treated unfairly. Eventually, users will rebel and either fake and obfuscate their data, flee to alternative platforms, or opt out altogether.

Using encrypted tools is a good strategy. However, the other 1999 founding paper of the field of usable security, Why Johnny Can’t Encrypt (PDF), by Alma Whitten and J.D. Tygar, found that even given good instructions and cognitive walkthroughs only two of 12 participants were able to complete a set of routine encryption tasks using PGP 5.0. Whitten’s follow-up to this highly important paper was to create the LIME tutorial, which requires a day and a half to educate users about how public key cryptography works – another example of well-intentioned paternalism. In her 2004 thesis, Making Security Usable (PDF), Whitten wrote:

Looking at the problem of creating usable security from a wider perspective, however, it is clear that there are significant benefits to supporting users in developing a certain base level of generalizable security knowledge. A user who knows that, regardless of what application is in use, one kind of tool protects the privacy of transmissions, a second kind protects the integrity of transmissions, and a third kind protects access to local resources, is much more empowered than one who must start fresh in each new application context.

Whitten herself recorded users’ responses in a footnote:

…when presented with a software program incorporating visible public key cryptography, users often complained during the first 10-15 minutes of the testing that they would expect “that kind of thing” to be handled invisibly. As their exposure to the software continued and their understanding of the security mechanisms grew, they generally ceased to make that complaint.

For Sasse, this cheery dismissal represents the same fundamental error of mistaking acceptance for consent. From the users’ point of view: they complained; nothing happened; they were being paid; they went on. But clearly overruling what people want will not spur adoption. As Philip Hallam-Baker aptly put it at the 2006 NIST PKI workshop, “People want to protect themselves, not join a crypto-cult”.

Getting users to adopt this kind of technology is one of the most fundamental challenges we face. Many smart people have worked on developing encrypted chat tools but complain of lack of adoption. UCL colleague Ruba Abu-Salma has found, in interviews with 60 chat users, that although all had tried at least one or two encrypted tools, 50 had stopped using them. Her study (PDF) found three main reasons. First, the tools lacked utility: interviewees’ correspondents didn’t or wouldn’t use them, or users needed group chat support, which wasn’t available. Second, they lacked usability: installation posed problems, key exchange is cumbersome, and decryption can take minutes. If these chat tools were cars they wouldn’t go most of the places you wanted to go, and half the time you’d have to push them. Better results are obtained from securing an already popular application like WhatsApp, with its enormous user base. Third, users hold many misconceptions about the risks they face and the protection the tools offer, including a lack of belief that encryption actually works: they think anyone who writes code can break it at will, and they believe proprietary code must be more secure. A value proposition to users must tackle these misconceptions.

Sandboxing provides another example. While sandboxes limit the spread of malware, they also often prescribe how users should organise their data and reduce app functionality by forcing developers to drop features and plugins. Sasse’s PhD student Steve Dodier-Lazaro has interviewed 13 users over a long period of time and, like Abu-Salma, finds that users began using the technology with good intentions but over time all gave up and disabled it. Sandboxes interfered too much with utility, and users reject security updates that remove features they actually use. The most technically savvy users – developers – were the first to disable it. Work in progress suggests sandboxing is acceptable if properly implemented; at the moment, though, it is not worth losing the ability to move data to where it is needed, or to separate work and personal data or data belonging to different clients. Sasse’s group believes sandboxing can be successfully improved.

In security, however, paternalism is often destructive, imposing requirements on users that run counter to what they want. Roger Needham raised this problem in his 2002 Royal Society Clifford Paterson lecture:

Not only in security is it the case that an ordinary person has a problem and a friendly mathematician solves a neighbouring problem. An example that is of interest here is the electronic book. We have a pretty good idea of the semantics of the paper book. We go and buy it, we can lend it to our spouse or to a friend, we can sell it, we can legitimately copy small bits of it for our own use, and so on.

Needham went on to point out that publishers tasked mathematicians with making sure precisely those things cannot be done with ebooks – even though there were credible proposals, for example from Ted Nelson, the father of hypertext, for micropayments and a “transcopyright” method of granting permission for reuse. What users needed and wanted was completely ignored.

Also destructive is the ritual, habitual, and deeply ingrained demonisation of users among security experts. This year, at CyberUK, the NCSC launched its “People are the strongest link” campaign to end this mindset.

The Denver Manifesto clearly points to essential long-term changes to move us on from here. Computer science students need to be introduced to the concept of values and taught to incorporate them into system design. They should learn to think critically, reflectively, and empathetically. Getting to that point requires engagement between the people using security mechanisms and the people developing them. Today, typically that doesn’t happen.


WannaCry provides another example of a case in which it is rational for users to ignore the advice and recommendations issued in response. People do want and value trustworthy expert advice – but irrelevant advice, squabbling, and name-calling convince them that none of these players are competent or worth their attention.

The mind shift Sasse hopes to spark includes engaging with users and being open to the idea that sometimes the best solution to a security problem is investing in apparently unrelated changes. Sasse has seen companies where incidents could have been stopped by changing hiring practices so that staff weren’t working 16-hour shifts that left them too tired to notice problems. In a 2014 study of the New York public transit system, Harvey Molotch found that safety would be better served by improving lighting, ventilation, and PA systems to ensure safe evacuation than by repeated garbled announcements telling riders to report suspicious packages – announcements riders ignored because reporting would halt the trains. In general, improving overall resilience is more important than defending against specific threats.

Sasse concluded with four recommendations. First, and most important, don’t waste people’s time and attention. Second, recognise that much security advice is paternalistic and not based on the security people actually want. Third, acknowledge that paternalism often masks incompetence, vested interests, and unwillingness to change. Finally, giving up blaming users in favour of supporting them might bring real progress, but it requires a new and broader set of skills and a different mindset and language.

Questions began with a query about whether security really matters, given that the world hasn’t ended as a result of its not working. Sasse agreed that in some cases, such as warnings, the problem has been exaggerated out of all proportion. But in others the issue is people’s natural inclination to muddle on and make the best of things; they take all sorts of risks because they want to deliver on their main job. Ultimately, that is an unfair situation, because they will be blamed. If the world hasn’t ended, it may be just luck, or it may be that people have made things sort-of-work at a terrible price. Organisations, meanwhile, often feel they are being sold products on the back of FUD.

The second questioner asked if regulation is the solution to misaligned incentives. Sasse believes that working constructively with different stakeholders and changing their incentives is a way forward, although in particularly egregious situations, such as banking policies, pressure from consumer organisations may be essential.

Forty years ago, the third questioner heard a talk about an organisation whose myriad data entry errors were reduced by reformatting the 40-digit numbers employees had to type in to include spaces that split them up into memorable chunks. Yet today bank account numbers are still presented as uninterrupted 16-digit strings with no check digits, and users must type them in at their own risk. Sasse agreed that in industrial design, this kind of 50-year-old error would never happen. The maturity of computer design does not match that of older physical systems such as cockpits. In a study with US colleagues, Sasse found that software development is so intensely tribal that even developers trained in security and usability appeared to have forgotten their education after moving into real-world development environments. The older generation sets the standards, and newcomers worry more about fitting in than spreading their new knowledge and skills.
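As a purely illustrative sketch of the questioner’s point (not something presented in the lecture), a display layer can chunk a long digit string into memorable groups and verify a simple check digit so that typing errors are caught at entry; the grouping size and the use of the Luhn scheme here are assumptions.

    def chunk(digits: str, size: int = 4) -> str:
        """Format a digit string into groups, e.g. '4012 3456 7890 1234'."""
        digits = digits.replace(" ", "")
        return " ".join(digits[i:i + size] for i in range(0, len(digits), size))

    def luhn_check_digit(digits: str) -> int:
        """Compute a Luhn check digit so mistyped numbers can be rejected."""
        total = 0
        for pos, ch in enumerate(reversed(digits)):
            d = int(ch)
            if pos % 2 == 0:  # double every second digit from the right
                d = d * 2 - 9 if d * 2 > 9 else d * 2
            total += d
        return (10 - total % 10) % 10

    number = "4012345678901234"      # illustrative 16-digit string
    print(chunk(number))             # easier to read and to type
    print(luhn_check_digit(number))  # digit a form could verify on entry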

In IoT, the problem appears to be numerous siloed domains. Sasse believes mandatory reviews may be necessary, though these will slow innovation and companies will complain about the overheads.

Research, both Sasse’s own and other people’s, makes clear that many misconceptions must be remedied. Education must be coherent and correct; the UK has at least ten education bodies, and as a first step these must reach agreement on a single consistent message. In schools, more is needed to teach children about risks in the cyber world – helping them understand the risks and the potential consequences, but then letting them make their own decisions.

In the compiler field, teachers prefer to teach their own research rather than prior knowledge, creating graduates who are ignorant of the field’s history. Sasse noted that UCL is moving to separately appointed teachers for first and second years rather than using researchers to teach these students. Sasse believes security experts should review the sample code given to students and remove the known security problems.

One reason we don’t rent murder cars like XKCD’s is that it’s illegal to offer them. Yet computer security is presented as an individual responsibility when much of the trouble is structural. Is it fair to ask users to be computer scientists, or to ask cryptographers to be warm and fuzzy extraverts? Instead, it may be time for risk-based regulation. For example, Samsung’s IoT hub certificate has a lifetime of 25 years and uses SHA-1. Many other vulnerabilities follow the same pattern: a huge structural problem that is less about risk reduction than about risk shifting, which compromises users and then blames them.
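As a hedged sketch of how such certificate properties might be audited (the file name and the ten-year threshold are assumptions, and this is not a tool mentioned in the lecture), the Python cryptography package can flag a certificate that is signed with SHA-1 or that has an excessively long validity period:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    MAX_YEARS = 10  # assumed policy threshold for illustration

    def audit_certificate(pem_path: str) -> list[str]:
        """Return warnings for a weak signature hash or a very long lifetime."""
        with open(pem_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())

        warnings = []
        if isinstance(cert.signature_hash_algorithm, hashes.SHA1):
            warnings.append("certificate is signed with SHA-1")

        lifetime_days = (cert.not_valid_after - cert.not_valid_before).days
        if lifetime_days > MAX_YEARS * 365:
            warnings.append(f"validity period is roughly {lifetime_days // 365} years")
        return warnings

    for warning in audit_certificate("hub-cert.pem"):  # hypothetical file name
        print("WARNING:", warning)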

A questioner defended IT professionals despite giving her students recommendations similar to Sasse’s: many of these problems are genuinely hard. What is the solution to trying to meet users’ demands for convenience and usability while mitigating their risk? Sasse wants to end the myth of the tradeoff between usability and security; in many cases the problem is failing to get sufficient information to design appropriately for the situation. The myth is an excuse for that failure.

A final questioner asked about the move to third-party authentication provided by Google and Facebook. Sasse is worried by the amount of data being collected by companies that will use it for behavioural analysis and advertising without the user’s awareness. Even though these companies say the users remain anonymous, reidentification is trivial. Sasse suspects they’re using authentication as a way of growing their databases for advertising and marketing purposes.

[1] Sasse adds that an important initiative for values-based design is the IEEE’s P7000 Model Process for Addressing Ethical Concerns During System Design standards effort, led by Sarah Spiekermann-Hoff.

About Wendy M. Grossman

Freelance writer specializing in computers, freedom, and privacy. For RISCS, I write blog posts and meeting and talk summaries.
