Role modelling: women in cyber security

Angela Sasse

SC Magazine and cyber security writer Kate O’Flaherty’s list of “20 SC Women of Influence in UK Cyber Security 2017”, published September 5, included RISCS director Angela Sasse alongside several other familiar names, such as Sadie Creese, director of the Global Cyber Security Capacity Centre at the Oxford Martin School, Information Commissioner Elizabeth Denham, and IISP general manager Amanda Finch. A companion editorial by Naina Bhattacharya notes that only 11% of the cyber security workforce are women. In the US, PwC finds a similar situation: there, women make up 14% of the workforce.

Lynne Coventry

RISCS is, however, demonstrating what a more diverse workforce might look like: many of the institute’s key researchers are women. Lizzie Coles-Kemp (Royal Holloway), the deputy director, spent many years working as a security practitioner before moving into academia. Others associated with RISCS are applying their work from other disciplines to cyber security. These include Lynne Coventry (Northumbria, psychology and usability); Pam Briggs (Northumbria, applied psychology, identity, and trust); Madeline Carr (Cardiff, international relations and internet governance); Monica Whitty, whose work in psychology quickly led her to study online relationships and mass-market scams and to lead the Detecting and Preventing Mass-Marketing Fraud project; and Helen Sharp (Open University), who is drawing on her expertise in software engineering and coding communities to lead the Motivating Jenny to write secure software project.

Helen Sharp

Monica Whitty

Lizzie Coles-Kemp

Madeline Carr

This diversity underpins RISCS’ work on projects such as the four linked phase one projects, which led directly to the policy NCSC published earlier this year, People are the Strongest Link, and to the following three key ideas for rethinking how cyber security is implemented:

  • Users are not the enemy, but a key asset in delivering security;
  • Users’ time is a limited and costly resource, so usable and efficient security is essential;
  • Users’ goals, values, and working practices need to be considered, and security should be designed to fit around them.

Christian Wagner: Leveraging uncertainty in cyber security

Christian Wagner

The key to the plot of Philip K. Dick’s story Minority Report (spoiler alert) is that there were three precognitives, and in cases where two of them agreed, it was sometimes critical to look at what the dissenter was saying.

Considering dissenting outliers and managing uncertainty are part of the new EPSRC and NCSC-funded RISCS project Leveraging the Multi-Stakeholder Nature of Cyber Security, led by Christian Wagner from the Lab of Uncertainty in Data and Decision Making (LUCID) at the University of Nottingham. The goal is to establish the foundations for a digital “Online CYber Security System” (OCYSS) that will offer decision support by rapidly bringing together information on system vulnerabilities and alerting organisations that may be affected by them.

For the last five to ten years, Wagner, whose background is in working with uncertain data such as that collected by sensors in robotics or captured explicitly from people, has been working with colleagues from the University of Nottingham on problems of interest to CESG, NCSC’s precursor. “A key aspect they were interested in,” he says, “was that when the vulnerabilities of systems are assessed, usually multiple cyber security experts look at them, some internal, some external. Commonly, they break down these systems by attack vectors and attack paths. Then each path is split into steps – hops. Compromising a system may involve multiple paths and hops, so what they were interested in initially was – how do we capture the unavoidable uncertainty in these assessments and what are the patterns in the differences between individual experts’ assessments?”

In practice, experts in vulnerability assessment are not asked to fix the systems but to highlight potential weaknesses. The fundamental questions, therefore, are first, how confident we can be in the assessments they are providing, and second, how we should deal with the variability between multiple people assessing the same system. If their assessments differ, how do we know which assessment we should follow? The reports made by Philip K. Dick’s three “precogs” were simply studied for majority agreement – which turned out to be the problem. That’s not as valid an approach in cyber security, where often one individual may hold key knowledge, or experts from different areas may be aware of different key aspects. A final key question: what is the minimum – or ideal – number of expert opinions needed to give you confidence that you’ve got the right assessment?

At first glance, the prior work Wagner cites as relevant appears to have little to do with security. The 2015 paper Eliciting Values for Conservation Planning and Decisions: A Global Issue was published in the Journal of Environmental Management. Yet a closer look finds the connection: he and co-authors Ken J. Wallace and Michael J. Smith discussed the difficulty of decision-making about the conservation and use of natural resources in the face of competing human values and, particularly, how to incorporate the value of human well-being into otherwise technical deliberations. Especially relevant was the way stakeholders were asked to rate the importance of each of the values they were considering.

Few of us can have escaped the five-point scales so often used in such surveys, the ones that ask you to locate your reaction along a given range – for example, from strongly agree to strongly disagree, or from very important to very unimportant. These Likert scales are the most common tool in this type of survey, and modern pollsters love them because they’re easily turned into numbers representing our reactions that can be simply compared and quantitatively manipulated. Yet even in the limited context of rating your satisfaction with a hotel room the approach is frustratingly constraining: was that bed a 3 or a 4? Is there a way to indicate that the mattress was great but the bed’s positioning right under the air conditioning unit doomed its occupant to discomfort? In a more complex realm, like conservation, where the values are fuzzier – human physical health, or the spiritual value of the outdoors – the Likert scale approach throws away the very real uncertainty people feel in marking these assessments and their observable hesitation in picking a single number.

“If you watch people completing these,” Wagner says, “what usually happens is they display hesitancy between points.” He and his co-authors set out to see if they could get something more out of these quantitative techniques. Their solution, which they have since applied to NHS quality of life assessments, asks subjects to instead draw an ellipse around a point on a continuous scale – for example, 1 to 10. The position of the ellipse indicates the response; its width indicates the scale of the uncertainty the subject associates with their answer. “It’s intuitive to people to doodle,” Wagner says. In assessing the results, the researchers measure the uncertainty surrounding the response as well as the response itself.

Capturing uncertainty via a modified Likert scale (Wagner’s diagram)

The result is a level of quantitative information that goes beyond standard questionnaire approaches, coming a very small step closer to powerful qualitative techniques such as interviews, which can’t be used for this type of work because they cannot be scaled up for the continuous collection of large samples. Wagner’s process captures more information, and rapidly; people actually find this method of answering survey questions faster than using a Likert scale. Research partner Carnegie Mellon is investigating this aspect: what is it people like, and how well does the technique capture the uncertainty in people’s minds, as opposed to just showing that people like circling things? These captured intervals also offer greater opportunities for algorithmic analysis and modelling.
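
To make the idea concrete, here is a minimal sketch – not LUCID’s actual implementation, and with invented response values – of how an ellipse drawn on a 1–10 scale can be stored as an interval, preserving both the answer and the respondent’s uncertainty:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class IntervalResponse:
    """An ellipse drawn on a continuous 1-10 scale, stored as an interval."""
    lower: float  # left edge of the ellipse
    upper: float  # right edge of the ellipse

    @property
    def centre(self) -> float:
        """The position of the response on the scale."""
        return (self.lower + self.upper) / 2

    @property
    def width(self) -> float:
        """The uncertainty the respondent associates with the answer."""
        return self.upper - self.lower

# Hypothetical responses from five assessors rating the same item.
responses = [
    IntervalResponse(6.0, 8.0),
    IntervalResponse(5.5, 6.5),
    IntervalResponse(7.0, 9.5),
    IntervalResponse(6.0, 7.0),
    IntervalResponse(4.0, 8.0),
]

# Summarise the group while retaining both the central tendency and the
# spread of uncertainty, rather than collapsing each answer to one number.
print("mean position:   ", round(mean(r.centre for r in responses), 2))
print("mean uncertainty:", round(mean(r.width for r in responses), 2))
```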

The statistical results from initial work suggest that asking five experts to assess a system provides a good level of coverage that can inspire confidence.

Beyond this, a crucial question addressed by the LUCID team when considering attack vectors and their component stages – or hops – is: if you know the vulnerability of each individual hop can you infer the vulnerability of the overall vector? This isn’t as straightforward as it might seem because of uncertainty and dependencies between hops: hops A and B may be fine in isolation, but together they may be a problem.

In early work, the LUCID team collected vulnerability assessments of attack vectors and associated hops for a series of scenarios, proceeding to show how the individual hop vulnerabilities can be exploited to reproduce the overall vulnerability of a vector. “It’s neat that we could show how a well-known mathematical operator – an ordered weighted average – can replicate the judgement of experts when combining the vulnerability of hops into an overall vulnerability assessment for a vector,” he says. “In practice, it means that experts expect the hop with the highest vulnerability to have the largest influence over the vulnerability of the overall vector – which intuitively makes sense.”
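
The ordered weighted average itself is a standard operator: the inputs are sorted and the weights attach to rank positions rather than to particular hops, so the most vulnerable hop can be given the most influence. The sketch below illustrates it with invented hop scores and weights; the numbers are not those elicited by the LUCID team.

```python
def ordered_weighted_average(scores, weights):
    """Ordered weighted average (OWA): sort the inputs in descending order,
    then take a weighted sum, so weights attach to rank positions rather
    than to particular hops."""
    assert len(scores) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
    ranked = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(weights, ranked))

# Hypothetical hop vulnerabilities for one attack vector (0 = secure, 1 = open).
hop_vulnerabilities = [0.2, 0.7, 0.4]

# Illustrative weights giving the most vulnerable hop the largest influence,
# mirroring the experts' intuition described above.
weights = [0.6, 0.3, 0.1]

print(round(ordered_weighted_average(hop_vulnerabilities, weights), 2))  # 0.56
```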

One practical application of this work addresses a really important issue facing the NCSC (formerly CESG): there simply aren’t enough expert assessors to look at all the systems we have. The country’s thousands of systems may each have different – and changing – attack paths, but all these paths are made up of hops, and the hops may be shared across many systems. In an ideal world, one could reduce the required effort by looking at only the constantly changing hop vulnerabilities and aggregating them to tell companies what their highest-risk paths of attack are.
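
A toy sketch of that idea follows, with an invented catalogue of systems, paths, and hops, and a deliberately crude aggregator (the most vulnerable hop stands in for the weighted operators discussed above): when new knowledge about one hop arrives, only the systems whose paths share that hop need re-scoring.

```python
# Hypothetical catalogue: shared hop scores and the attack paths of two systems.
HOP_VULNERABILITY = {"phish-credentials": 0.6, "exploit-vpn": 0.3, "escalate-admin": 0.5}

SYSTEMS = {
    "payroll":  [["phish-credentials", "escalate-admin"]],
    "web-shop": [["exploit-vpn", "escalate-admin"], ["phish-credentials"]],
}

def path_risk(path):
    """Aggregate hop vulnerabilities into a path score (crude stand-in: the max)."""
    return max(HOP_VULNERABILITY[hop] for hop in path)

def alert_on_update(hop, new_score):
    """Update one hop's score and report the riskiest path of every system
    that shares that hop."""
    HOP_VULNERABILITY[hop] = new_score
    for system, paths in SYSTEMS.items():
        if any(hop in path for path in paths):
            worst = max(paths, key=path_risk)
            print(f"{system}: riskiest path {worst} scores {path_risk(worst):.2f}")

# A newly published vulnerability raises the score of one shared hop.
alert_on_update("exploit-vpn", 0.9)
```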

“It’s not a panacea,” Wagner warns. “We’re not solving cyber security.” However, “The system specifically addresses how we deal with not having enough experts to look at all these vectors and how we might minimise the time it takes for new knowledge on vulnerabilities to be available to relevant system users.” Eventually, the idea behind OCYSS is to have an online platform that will make it possible for experts and other stakeholders such as software providers to combine their insights rapidly and efficiently.

What is crucial is retaining uncertainty: “In a lot of areas of science there is an expectation that there is a crisp truth,” Wagner says. In this case, the fundamental point is that there may be no one crisp answer, just as you can’t supply a single definitive number to answer the question, “How safe is running?” Therefore, it’s essential to preserve the uncertainty of those inputs all the way through the process. Similarly, it’s important not to ignore outliers: even if four out of your five experts choose one thing and just one chooses another, it’s important to spend a little time looking at that minority report.

Angela Sasse: Can we make people value IT security?

Angela Sasse

As a prelude to this year’s Workshop on Security and Human Behaviour, RISCS director Angela Sasse gave the Cambridge Computer Lab’s annual Wheeler lecture, which we summarise here. Ross Anderson live-blogged both the lecture and the workshop, and a recording is available at Bentham’s Gaze.

Sasse began by answering the lecture’s title question by saying: “it’s the wrong question”, adding that what we need is a fundamental shift in how we think about how we do security.

Sasse stumbled serendipitously into information security in the late 1990s, when working with Peter Kirstein, Jon Crowcroft, and Mark Handley on early VOIP and videoconferencing tools. Their telco partner, BT, asked her – saying, “you know something about usability, right?” – to look at a problem: the accountants were demanding an end to the spiralling cost of help desks required to reset passwords. By this time, BT had 100 people in a Scottish call centre doing nothing but resetting passwords for company employees, on top of the normal call centres for customers. Sasse was asked to “do a quick study and find out why these stupid users can’t remember their passwords”, and, with her PhD students, did just that.

This initial study, which resulted in the widely cited paper Users Are Not the Enemy (PDF), found that the company’s security policies required employees to perform impossible memory tasks. Unless the company wants to pay for all its staff to take a year off to train as memory athletes, nobody can remember 16 to 64 eight-digit uncrackable codes and six-digit PINs that change monthly without writing them down. Most people who read and cite the paper get this point; what is often missed is the closing paragraphs’ warning that asking users to do something impossible and then shouting at them when they don’t comply leads to the worst possible outcome: users conclude that security is a joke, and a downward spiral sets in, creating a security culture in which the two camps fight each other. This situation helps only attackers.

Twenty years on, there are some basic points everyone ought to know if they’re designing security for a system:

  • Who will use it?
  • What is their main job?
  • What other security mechanisms do they use?

Every security designer should know that complex systems cause mistakes unless constant use leads their users to become very skilled, and that the combination of high-workload security and conflicts with primary tasks leads to non-compliance and shadow practices. Sasse has spent much of the last 20 years helping organisations that call her in saying they know they have non-compliance problems that could land them in trouble with their auditors and regulators, as well as making them vulnerable to attackers. In those engagements, Sasse typically finds that, rather than not caring, users generally try to implement the best security that they think is feasible and manages the risks adequately. So, for example, someone unable to share a file with a colleague because of difficulties with access controls might opt to email it password-protected – a choice most security experts will dismiss with contempt as “laughably insecure”. That user is thinking about the confidentiality of the information, whereas the security person is concerned with preserving version control and audit trails, other benefits users may be unaware of. Better communication is necessary to ensure that users understand the security requirements associated with the tasks they carry out.

Some things have improved. Today we have better alternatives for authentication, and there is little need to have myriad complex passwords. The UK’s National Technical Authority has acknowledged that the old rules required impossible memory feats, and published revised guidelines that match the current threat landscape and reduce the burden on users. However, there are many other security measures that continue to drain enormous amounts of time and are complex to use. In some cases, the benefits of using the technologies are so indiscernible that even the most minimal effort doesn’t seem worth it. Before asking people to comply, we should ask if the measures we’re imposing on them are worth their trouble and get rid of the ones that offer little benefit.

One of these bêtes noires is security warnings: a key usability principle holds that these should be saved for exceptional events the designer can’t anticipate. Yet these warnings pop up all the time, and until very recently the associated false positive rates were extremely high. People have very quickly learned to just swat them away – a habit Microsoft’s user interface designers had already embedded. SSL warnings are a particularly bad example: users don’t understand what they mean, don’t know what decision to make, and can only conclude there is nothing they can do except either click OK or give up and go home.

A user’s view of an HTTPS warning (by Matthew Smith)

Human factors experts regard false positive rates of over 3% – certainly over 5% – as a problem, as people will stop taking them seriously. A 2013 study of HTTPS administrator mistakes by Devdatta Akhawe and Adrienne Porter-Felt (PDF) found a rate of 15,400 false positive certificate warnings to one true one. At that rate, the mechanism is too dysfunctional to deploy. Nonetheless, a train of work presented at CHI 2015 continues to attempt to force users to pay attention to these warnings by making it harder and less attractive to click OK and by using variations in colours, text, and box size to delay habituation until the user has seen the warning 13 times. Functional MRI studies have shown that without such changes it takes only two viewings.

Nonetheless, acceptance is rising that tools with very high false positive rates are a mistake. Google has been working to reduce the false positive rate, and researchers like Matthew Smith and Sascha Fahl in Germany and Marian Harbach in the US, in their 2013 paper Alice in Warningland (PDF), have found that these inaccurate warnings are caused by implementation errors and have reduced these misconfigurations by correcting erroneous example code in places developers go, such as Github and Stack Overflow. Gradually, the false positive rate is dropping, a better solution for all concerned.

Another of Sasse’s pet peeves – and according to a BBC report something most users hate – is CAPTCHAs, the challenge-response tests designed to detect spam bots and stop them from signing up for free email accounts, mounting automated password-guessing attacks, mining and scraping data, and manipulating online data gathering. The use of CAPTCHAs is fundamentally dishonest: instead of acknowledging that the service provider has a security problem, it dumps that problem onto all the service’s users by making them prove they’re human. In the physical world, people do not put up with this. When Ryanair tried to stop screen scraping (so its fares wouldn’t appear on price comparison sites) by adding CAPTCHAs, the airline’s bookings dropped significantly and online forums such as TripAdvisor filled with complaints. The CAPTCHAs were soon removed. Adding to the nuisance value, the distorted CAPTCHAs are extremely hard for many people to read; even the recent improved versions, such as Google’s “I’m not a robot” tickbox or the “fun” animations, still waste people’s time, and no one likes them. It’s notable that many born-digital companies manage without them or use them very sparingly.

Sasse’s argument is that underlying these mechanisms is a form of paternalism that holds that security people are the experts, and people should trust them and do what they say. This has led to the relatively recent trend of incorporating behavioural economics into security. Richard H. Thaler and Cass R. Sunstein’s 2008 book, Nudge, is based on this: “choice architects” decide which are the good choices people need to be directed to make. Many studies have shown that setting defaults to require opt-out works – for example, in raising organ donation rates in the UK or increasing pension sign-ups. Nudges do produce more compliance, but applications in computer security have overlooked the fact that the choices have to be genuinely beneficial, which is often not the case in security. Cue XKCD’s murder car:

XKCD 1837 – “Rental Car”

This year’s CHI saw the beginnings of a resurgence within usability of a movement whose principles could be valuable in security. A workshop on Batya Friedman’s value-sensitive design concluded that usability in general has lost its way and ought to return to its roots: researching people’s genuine problems and needs and designing technology to support those. The resulting Denver Manifesto formulates this strategy:

It is important for these values to be explicitly and intentionally considered, not just with respect to the values intended but whose values are included, how conflicting values are negotiated, and how values are deployed in practice, especially but not solely when a technology is not fully transparent about how it produces its outputs.[1]

Friedman (University of Washington), the pioneer in this area, believes that security and privacy particularly need this approach. Such an adaptation would focus on understanding the security and privacy properties users are looking for rather than imposing on them a paternalistic set of values. In 2002, with Daniel C. Howe and Ed Felten, she developed a framework for assessing informed consent and used it to redesign the Mozilla browser’s cookie management mechanisms, written up as Informed Consent in the Mozilla Browser (PDF). In 2005, in the paper Informed Consent by Design (PDF), written with Peyina Lin and Jessica K. Miller, she took the redesigned Mozilla cookie manager (“Cookie Watcher”) further, giving people more usable information and just-in-time management tools, and examined how users assess whether a website is secure. The paper set out six principles of meaningful consent:

  • Disclosure: provide accurate information about benefits and harms;
  • Comprehension: the user must understand what is being disclosed;
  • Voluntariness: user can reasonably resist participation;
  • Competence: user has mental, emotional and physical competences to give informed consent;
  • Agreement: clear opportunity to accept or decline;
  • Minimal Distraction: user’s attention should not be diverted from main task.

Sites always claim they get informed consent; however, Sasse argues that these principles, which are also accepted by the OECD, are not only often still not followed but are even trampled upon with impunity. The iTunes agreement, for example, is 52 pages long and requires legal training to understand. Companies effectively make us lie to cover their corporate backs. We know users do not read these documents; some have accepted T&Cs that include giving up one’s immortal soul. There is nothing voluntary about accepting them: agree or don’t use the service – a situation companies exploit to claim that users do not care about privacy. Studies find the opposite and also that even supposedly privacy-apathetic US users feel they’re being treated unfairly. Eventually, users will rebel and either fake and obfuscate their data, flee to alternative platforms, or opt out altogether.

Using encrypted tools is a good strategy. However, the other 1999 founding paper of the field of usable security, Why Johnny Can’t Encrypt (PDF), by Alma Whitten and J.D. Tygar, found that even given good instructions and cognitive walkthroughs only two of 12 participants were able to complete a set of routine encryption tasks using PGP 5.0. However, Whitten’s follow-up to this highly important paper was to create the LIME tutorial, which requires a day and a half to educate users about how public key cryptography works, another example of well-intentioned paternalism. In her 2004 thesis, Making Security Usable (PDF), Whitten wrote:

Looking at the problem of creating usable security from a wider perspective, however, it is clear that there are significant benefits to supporting users in developing a certain base level of generalizable security knowledge. A user who knows that, regardless of what application is in use, one kind of tool protects the privacy of transmissions, a second kind protects the integrity of transmissions, and a third kind protects access to local resources, is much more empowered than one who must start fresh in each new application context.

Whitten herself footnoted users’ responses:

…when presented with a software program incorporating visible public key cryptography, users often complained during the first 10-15 minutes of the testing that they would expect “that kind of thing” to be handled invisibly. As their exposure to the software continued and their understanding of the security mechanisms grew, they generally ceased to make that complaint.

For Sasse, this cheery dismissal represents the same fundamental error of mistaking acceptance for consent. From the users’ point of view: they complained; nothing happened; they were being paid; they went on. But clearly overruling what people want will not spur adoption. As Philip Hallam-Baker aptly put it at the 2006 NIST PKI workshop, “People want to protect themselves, not join a crypto-cult”.

Getting users to adopt this kind of technology is one of the most fundamental challenges we face. Many smart people have worked on developing encrypted chat tools but complain of lack of adoption. UCL colleague Ruba Abu-Salma has found in interviews with 60 chat users that although all had tried at least one or two encrypted tools, 50 had stopped using them. Her study (PDF) found three main complaints. First, the tools lacked utility because interviewees’ correspondents didn’t or wouldn’t use them, or users needed group chat support, which wasn’t available. Second, they lacked usability: installation posed problems, key exchange is cumbersome, and decryption can take minutes. If these chat tools were cars, they wouldn’t go most of the places you wanted to go, and half the time you’d have to push them. Better results are obtained from securing a popular application like WhatsApp, with billions of users. Finally, Abu-Salma’s study found a third reason for the lack of adoption: among users’ many misconceptions about the risks they face and the protection the tools offer is a lack of belief that encryption actually works. They think anyone who writes code can break it at will, and they believe proprietary code must be more secure. A value proposition to users must tackle these misconceptions.

Sandboxing provides another example. While sandboxes limit the spread of malware, they also often prescribe how users should organise their data and reduce app functionality by forcing developers to drop features and plugins. Sasse’s PhD student Steve Dodier-Lazaro has interviewed 13 users over a long period of time and, like Abu-Salma, finds that users began using the technology with good intentions, but over time all gave up and disabled it. Sandboxes interfered too much with utility, and users reject security updates that remove features they actually use. The technically savviest users – developers – were the first to disable it. Work in progress suggests sandboxing is acceptable if properly implemented; at the moment it is not worth losing the ability to move data to where it is needed, or to separate work and personal data, or data belonging to different clients. Sasse’s group believes sandboxing can be successfully improved.

In security, however, paternalism is often destructive, imposing requirements on users that run counter to what they want. In his Royal Society Clifford Paterson lecture in 2002, Roger Needham raised this problem:

Not only in security is it the case that an ordinary person has a problem and a friendly mathematician solves a neighbouring problem. An example that is of interest here is the electronic book. We have a pretty good idea of the semantics of the paper book. We go and buy it, we can lend it to our spouse or to a friend, we can sell it, we can legitimately copy small bits of it for our own use, and so on.

Needham went on to point out that publishers tasked mathematicians with making sure precisely those things cannot be done with ebooks – even though there were credible proposals, for example from Ted Nelson, the father of hypertext, for micropayments and a “transcopyright” method of granting permission for reuse. What users needed and wanted was completely ignored.

Also destructive is the ritual, habitual, and deeply ingrained demonisation of users among security experts. This year, at CyberUK, the NCSC launched the People are the strongest link campaign to end this mindset.

The Denver Manifesto clearly points to essential long-term changes to move us on from here. Computer science students need to be introduced to the concept of values and taught to incorporate them into system design. They should learn to think critically, reflectively, and empathetically. Getting to that point requires engagement between the people using security mechanisms and the people developing them. Today, typically that doesn’t happen.

WannaCry is another good example of a case in which it is rational for users to ignore the advice and recommendations issued in response. People do want and value trustworthy expert advice – but irrelevant advice, squabbling, and name-calling convince them none of these players are competent or worth attention.

The mind shift Sasse hopes to spark includes engaging with users and being open to the idea that sometimes the best solution to security problems is investing in apparently unrelated changes. Sasse has seen companies where incidents could have been stopped by changing hiring practices so staff weren’t working 16-hour shifts that left them too tired to notice problems. In a 2014 study of the New York public transit system, Harvey Molotch found that safety would be better enhanced by improving lighting, ventilation, and PA systems to ensure safe evacuation than by repeated garbled announcements telling passengers to report packages – announcements they ignored because doing so would halt the trains. In general, improving overall resilience is more important than defending against specific threats.

Sasse concluded with four recommendations. First and most important, don’t waste people’s time and attention. Second, recognise that much security advice is paternalistic and not based on security that people want. Third, be aware that paternalism often masks incompetence, vested interests, and unwillingness to change. Finally, giving up blaming users in favour of supporting them might bring real progress, but it requires a new and broader set of skills and a different mindset and language.

Questions began with a query about whether security really matters, given that the world hasn’t ended as a result of its not working. Sasse agreed that in some cases, like warnings, the problem has been exaggerated out of all proportion. But in others, the problem is people’s natural inclination to muddle on and make the best of things, and they take all sorts of risks because they want to deliver on their main job. Ultimately, that’s an unfair situation, because they will be blamed. If the world hasn’t ended, it may be just luck, or it may be that people have made things sort-of-work at a terrible price. Organisations often feel they’re being sold things to counter FUD.

The second questioner asked if regulation is the solution to misaligned incentives. Sasse believes that working constructively with different stakeholders and changing their incentives is a way forward, although in particularly egregious situations, such as banking policies, pressure from consumer organisations may be essential.

Forty years ago, the third questioner heard a talk about an organisation whose myriad data entry errors were reduced by reformatting the 40-digit numbers employees had to type in to include spaces that split them up into memorable chunks. Yet today bank account numbers are still presented as uninterrupted 16-digit strings with no check digits, and users must type them in at their own risk. Sasse agreed that in industrial design, this kind of 50-year-old error would never happen. The maturity of computer design does not match that of older physical systems such as cockpits. In a study with US colleagues, Sasse found that software development is so intensely tribal that even developers trained in security and usability appeared to have forgotten their education after moving into real-world development environments. The older generation sets the standards, and newcomers worry more about fitting in than spreading their new knowledge and skills.

In IoT, the problem appears to be numerous siloed domains. Sasse believes mandatory reviews may be necessary, though these will slow innovation and companies will complain about the overheads.

Research, both Sasse’s own and other people’s, makes clear that many misconceptions must be remedied. Education must be coherent and correct; the UK has at least ten education bodies, and as a first step these must agree on a single consistent message. In schools, more is needed to teach children about risks in the cyber world, on the principle of helping them understand the risks and the potential consequences but then letting them make their own decisions.

In the compiler field, teachers prefer to teach their own research rather than prior knowledge, creating graduates who are ignorant of the field’s history. Sasse noted that UCL is moving to separately appointed teachers for first and second years rather than using researchers to teach these students. Sasse believes security experts should review the sample code given to students and remove the known security problems.

One reason we don’t rent murder cars like XKCD’s is that it’s illegal to offer them. Yet computer security is presented as an individual responsibility when much of the trouble is structural. Is it fair to ask users to be computer scientists, or to ask cryptographers to be warm and fuzzy extraverts? Instead, it may be time for risk-based regulations. For example, Samsung’s IoT hub certificate has a lifetime of 25 years and uses SHA-1. Many other vulnerabilities are the same: a huge, structural problem that’s not about risk reduction but risk shifting that compromises users and then blames them.

A questioner defended IT professionals despite giving her students recommendations similar to Sasse’s: many of these problems are genuinely hard. What is the solution to trying to meet users’ demands for convenience and usability while mitigating their risk? Sasse wants to end the myth of the tradeoff between usability and security; in many cases the problem is failing to get sufficient information to design appropriately for the situation. The myth is an excuse for that failure.

A final questioner asked about the move to third-party authentication provided by Google and Facebook. Sasse is worried by the amount of data being collected by companies that will use it for behavioural analysis and advertising without the user’s awareness. Even though these companies say the users remain anonymous, reidentification is trivial. Sasse suspects they’re using authentication as a way of growing their databases for advertising and marketing purposes.

[1] Sasse adds that an important initiative for values-based design is the IEEE’s P7000 Model Process for Addressing Ethical Concerns During System Design standards effort, led by Sarah Spiekermann-Hoff.

The end of the billion-user Password:Impossible

XKCD: “Password Strength”

This week, the Wall Street Journal published an article by Robert McMillan containing an apology from Bill Burr, a man whose name is unknown to most but whose work has caused daily frustration and wasted time for probably hundreds of millions of people for nearly 15 years. Burr is the author of Appendix A of the 2003 Special Publication 800-63 from the US National Institute of Standards and Technology (NIST): eight pages that advised security administrators to require complex passwords including special characters, capital letters, and numbers, and dictated that they should be changed frequently.

“Much of what I did I now regret,” Burr told the Journal. In June, when NIST issued a completely rewritten document, it largely followed the same lines as the NCSC’s password guidance, published in 2015 and based on prior research and collaboration with the UK Research Institute in Science of Cyber Security (RISCS), led from UCL by Professor Angela Sasse. Yet even in 2003 there was evidence that Burr’s approach was the wrong one: in 1999, Sasse did the first work pointing out the user-unfriendliness of standard password policies in the paper Users Are Not the Enemy, written with Anne Adams.
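
To make the contrast concrete, the sketch below compares an old-style composition check with a length-and-blocklist check in the spirit of the revised guidance. It is illustrative only: the length threshold, character-class rules, and tiny blocklist are assumptions for the example, not the actual NIST or NCSC requirements.

```python
import re

def old_style_check(password: str) -> bool:
    """Composition rules in the spirit of the 2003 advice: a minimum length
    plus mandatory character classes (illustrative, not the exact policy)."""
    return (len(password) >= 8
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[a-z]", password) is not None
            and re.search(r"\d", password) is not None
            and re.search(r"[^A-Za-z0-9]", password) is not None)

# A stand-in blocklist; real deployments screen against large breach corpora.
BLOCKLIST = {"password", "letmein", "qwerty123", "p@ssw0rd"}

def newer_style_check(password: str) -> bool:
    """In the spirit of the revised guidance: favour length and screening
    against known-bad choices over arbitrary complexity and forced expiry."""
    return len(password) >= 8 and password.lower() not in BLOCKLIST

# "P@ssw0rd" satisfies every composition rule yet is a notoriously common choice.
print(old_style_check("P@ssw0rd"))    # True
print(newer_style_check("P@ssw0rd"))  # False
```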

How much did that error cost in lost productivity and user frustration? Why did it take the security industry and research community 15 years to listen to users and admit that the password policies they were pushing were not only wrong but actively harmful, inflicting pain on millions of users and costing organisations huge sums in lost productivity and administration? How many other badly designed security measures are still out there, the cyber equivalent of traffic congestion and causing the same scale of damage?

For decades, every password breach has led to the same response, which Einstein would readily have recognised as insanity: ridiculing users for using weak passwords, creating policies that were even more difficult to follow, and calling users “stupid” for devising coping strategies to manage the burden. As Sasse, Brostoff, and Weirich wrote in 2001 in their paper Transforming the ‘Weakest Link’, “…simply blaming users will not lead to more effective security systems”. In his 2009 paper So Long, and No Thanks for the Externalities (PDF), Cormac Herley (Microsoft Research) pointed out that it’s often quite rational for users to reject security advice that ignores the indirect costs of the effort required to implement it: “It makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain,” he wrote.

When GCHQ introduced the new password guidance, NCSC head Ciaran Martin noted the cognitive impossibility of following older policies, which he compared to trying to memorise a new 600-digit number every month. Part of the basis for Martin’s comments is found in more of Herley’s research. In Password Portfolios and the Finite-Effort User, Herley, Dinei Florencio, and Paul C. van Oorschot found that the cognitive load of managing 100 passwords while following the standard advice to use a unique random string for every password is equivalent to memorising 1,361 places of pi or the ordering of 17 packs of cards – a cognitive impossibility. “No one does this”, Herley said in presenting his research at a RISCS meeting in 2014.

The first of the three questions we started with may be the easiest to answer. Sasse’s research has found that in numerous organisations each staff member may spend as much as 30 minutes a day on entering, creating, and recovering passwords, all of it lost productivity. The US company Imprivata claims its system can save clinicians up to 45 minutes per day just in authentication; in that use case, the wasted time represents not just lost profit but potentially lost lives.

Add the cost of disruption. In a 2014 NIST diary study (PDF), Sasse, with Michelle Steves, Dana Chisnell, Kat Krol, Mary Theofanos, and Hannah Wald, found that up to 40% of the time leading up to the “friction point” – that is, the interruption for authentication – is spent redoing the primary task before users can find their place and resume work. The study’s participants recorded on average 23 authentication events over the 24-hour period covered by the study, and in interviews they indicated their frustration with the number, frequency, and cognitive load of these tasks, which the study’s authors dubbed “authentication fatigue”. Dana Chisnell has summarised this study in a video clip.

The NIST study identified a more subtle, hidden opportunity cost of this disruption: staff reorganise their primary tasks to minimise exposure to authentication, typically by batching the tasks that require it. This is a similar strategy to deciding to confine dealing with phone calls to certain times of day, and it has similar consequences. While it optimises that particular staff member’s time, it delays any dependent business process that is designed in the expectation of a continuous flow from primary tasks. Batching delays result not only in extra costs, but may lose customers, since slow responses may cause them to go elsewhere. In addition, staff reported not pursuing ideas for improvement or innovation because they couldn’t face the necessary discussions with security staff.

Unworkable security induces staff to circumvent it and make errors – which in turn lead to breaches, which have their own financial and reputational costs. Less obvious is the cost of lost staff goodwill for organisations that rely on free overtime – such as US government departments and agencies. The NIST study showed that this goodwill is dropping: staff log in less frequently from home, and some had even returned their agency-approved laptops and were refusing to log in from home or while travelling.

It could all have been so different as the web grew up over the last 20 years or so, because the problems and costs of password policies are not new or newly discovered. Sasse’s original 1999 research study was not requested by security administrators but by BT’s accountants, who balked when the help desk costs of password problems were tripling every year with no end in sight. Yet security people have continued to insist that users must adapt to their requirements instead of the other way around, even when the basis for their ideas is shown to be long out of date. For example, in a 2006 blog posting Purdue University professor Gene Spafford explained that the “best practice” (which he calls “infosec folk wisdom”) of regular password changes came from non-networked military mainframes in the 1970s – a far cry from today’s conditions.

Herley lists numerous other security technologies that are as much of a plague as old-style password practices: certificate error warnings, all of which are false positives; security warnings generally; and ambiguous and non-actionable advice, such as advising users not to click on “suspicious” links or attachments or “never” reusing passwords across accounts.

All of these are either not actionable, or just too difficult to put into practice, and the struggle to eliminate them has yet to bear fruit. Must this same story continue for another 20 years?

cSALSA: The meaning of “cyber security”

Adam Joinson

Part of Adam Joinson‘s work focuses on what “cyber security” actually means to both lay people and experts. Joinson, a professor of information systems at the University of Bath, is leading his newest project, cSALSA: Cyber Security Across the Life Span. Launched in April 2017, the three-year project has a long list of partners, primarily behavioural and cognitive psychologists, plus one computer scientist. Among the project’s partners are Pam Briggs (Northumbria University); Debi Ashenden (University of Portsmouth and the Centre for Research and Evidence on Security Threats); Darren Lawrence (Cranfield University); and researchers at Pacific Northwestern Labs, Carleton University, BAe Systems, and others.

The goal of the project is to take a lifespan approach to understanding how cyber security is understood and how that relates to risk and behaviour. There are many reasons for pursuing this approach. First, prior work supports the idea that there are distinct security challenges at different life stages. Briggs’s early work suggests a U-shaped curve of vulnerability, with the oldest and youngest most vulnerable to particular types of threats. Many other changes also occur during a lifetime: the resources people can draw on change as family, friends, work colleagues, and the power structures within these relationships shift over time. Power structures in particular can be quite important; the 21st century has seen the rise of the teen guru who knows the passwords for the family router. In addition, goals change throughout life as people aspire to and then achieve independence, stability, family, and security. These changing states also play a part in determining how individuals interact with technology products.

So, the cSALSA project seeks to study questions such as how these factors intertwine and interact and determine individuals’ responses. What protective steps do they take to understand risk? How do individuals deal with large-scale social and technological change? Age is not the only factor; cohort is also significant in determining an individual’s social networks, families, cognitive ability, technical understanding, and skills. Individuals also vary according to the vulnerabilities that are available for attackers to exploit.

The model the researchers are developing, which will be shared among all the partners, draws on approaches used for diseases to express individuals’ varying levels of exposure, which help to determine how they respond: whether they avoid thinking about the risk, seek as much information as they can find about it, or adapt to the changing situation. Each of these responses leads to a different outcome.

There are three main strands the project seeks to pull together over the course of its three years. One, define cyber security in everyday language; two, develop the results of year one into a dictionary for testing how different groups of people talk about cyber security; and three, create metrics from a series of interactions to study how to measure risk in cyber security tools, using the understanding gained from the first two years.

Currently, the researchers are working on definitions. Classical definitions pose the problem of having sharp boundaries. They define elements that are necessary and sufficient; everything that has those elements fits the definition, and everything that lacks one or more of them is excluded.

But “cyber security” may include myriad vastly different phenomena: hacktivism, cyber crime, cyber terrorism, and cyber warfare all fit within that one term. In addition, risk, by its nature, is fuzzy: we speak of degrees of risk, just as we speak of degrees of security or protection. Fuzzier definitions and, especially, boundaries are needed to capture this. Cognitive psychologists have developed prototype approaches that attempt to capture the degree to which something is or is not included. In this approach, exemplars are found for a superordinate category, some of which may be better than others – we might see a robin as a better exemplar of the superordinate “bird” than a penguin. For cyber security, exemplars might be information protection, with an opposing example of identity fraud or loss of bank card details.
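
A small sketch, with invented membership scores purely for illustration, shows the difference between classical (crisp) membership and the graded membership a prototype approach allows:

```python
# Classical (crisp) categorisation: an item is either in the category or not.
crisp_category = {"information protection", "cyber warfare", "cyber crime"}
print("phishing" in crisp_category)  # False -- no notion of "almost"

# Prototype-style categorisation: each exemplar carries a graded membership
# score between 0 and 1. These scores are invented for illustration; in the
# project they would come from participants' ratings.
graded_membership = {
    "information protection": 0.95,  # a very good exemplar
    "identity fraud":         0.80,
    "phishing":               0.70,
    "hacktivism":             0.55,
    "workplace surveillance": 0.30,  # a marginal case
}

def best_exemplars(memberships, threshold=0.6):
    """Return exemplars whose membership exceeds a threshold, ordered from
    most to least typical."""
    return sorted((term for term, score in memberships.items() if score >= threshold),
                  key=lambda term: -memberships[term])

print(best_exemplars(graded_membership))
```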

Among the possible applications of this work are contributions to theory creating links between security and privacy; the development of a dictionary that can be used to analyse discussions; improvements to the design of awareness and training materials; improvements to the design of security products and features; and the development of workplace metrics and measures.

Readers who would like to help are invited to complete the survey. Both experts and laypeople are welcome to participate.

In answer to questions, Joinson noted that a reason for seeking partners in the US and Canada was to capture some of the fundamental cultural differences; the model does take into account the fact that cohorts differ. Social changes such as working longer may also have their effects on individuals and change the microsystems they rely on.

Visualising access control policies

Charles Morisset

Charles Morisset‘s talk at the June 2017 RISCS meeting reported on his work with David Sanchez, a recent MSc graduate from Newcastle University, on visualising access policies to help people make better decisions. Funded by a small NCSC grant, the project finished in January 2017.

A common problem for security practitioners is maintaining access control policies that contain hundreds of rules, may be misconfigured, and have to be updated as policies change. Practitioners have to go through files that encode many hundreds or even thousands of rules in a markup language called XACML in order to understand what they can change. Even for technically trained experts, these files are difficult to read:

Example XACML-encoded policy, taken from the XACML 3.0 core specification, Section 4.1.1, p. 25

Morisset’s project studied visualising these policies using different options such as maps, user roles, permissions, and multilateral grids: making complex policies easier to understand at a glance should mean fewer errors that leave networks vulnerable. An online demonstration shows the design the group came up with, an ongoing effort called VisABAC, for the visualisation of attribute-based access control policies, together with a test visitors can take to help assess the effectiveness of these design changes. A significant difficulty for the project is that there is no benchmark for reading access control policies and therefore no way to answer the simple question: does this approach improve the situation or not? Morisset is hoping RISCS participants will be able to help answer this question.
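
For readers unfamiliar with the underlying model, the sketch below shows, in plain Python rather than XACML, the kind of attribute-based rule logic such policies encode. The attributes, rules, and combining strategy are invented for illustration and are far simpler than a real policy file.

```python
# Each rule pairs a condition over subject, action, and resource attributes
# with a decision; a real XACML policy encodes hundreds of these, plus
# combining algorithms, in verbose XML.
RULES = [
    ("doctors may read records in their own ward",
     lambda subj, act, res: subj["role"] == "doctor" and act == "read"
                            and subj["ward"] == res["ward"],
     "Permit"),
    ("nobody may delete audit logs",
     lambda subj, act, res: res["type"] == "audit-log" and act == "delete",
     "Deny"),
]

def evaluate(subject, action, resource):
    """Evaluate the rules with a simple deny-overrides strategy: any
    applicable Deny wins; otherwise an applicable Permit wins."""
    decisions = [d for _, cond, d in RULES if cond(subject, action, resource)]
    if "Deny" in decisions:
        return "Deny"
    return "Permit" if "Permit" in decisions else "NotApplicable"

doctor = {"role": "doctor", "ward": "A"}
print(evaluate(doctor, "read", {"type": "record", "ward": "A"}))       # Permit
print(evaluate(doctor, "delete", {"type": "audit-log", "ward": "A"}))  # Deny
```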

In the meantime, the researchers conducted a test in which they recruited 32 students, gave them the tool, identified the policy, and asked them to find the attributes. The results suggested that graphics are helpful with new policies but tend to be ignored once people have formed a mental model of how the policy works.

Results of early visualisation experiments

For future work, Morisset wants to:

  • consider helping security experts;
  • consider the general problem of understanding access control;
  • integrate multiple and appropriate visualisation techniques;
  • fully integrate with XACML and role-based access control.

Morisset also hopes to be able to use these designs to extend the ability to understand access control policies to non-technical people.

In comments, Angela Sasse noted that her group is finding that companies are increasingly delegating access control to people with no technical training – department managers pass the job on to their PAs or to project managers.

Why Johnny doesn’t write secure software

Awais Rashid

The aim of the three-year EPSRC-funded Why Johnny Doesn’t Write Secure Software project, which began in April 2017, Awais Rashid (Lancaster University) explained to the June 2017 RISCS meeting, is to develop an empirically grounded theory of secure software development by the masses. The project’s collaborators include others at Lancaster University: Charles Weir, John Towse, and newcomer Dirk van Linden. From elsewhere, it includes Pauline Anthonysamy (Google Switzerland); Bashar Nuseibeh, Marian Petre, and Thein Tun (Open University); Mark Levine (Exeter); Mira Mezini (TU Darmstadt); Elisa Bertino (Purdue); Brian Fitzgerald (Lero); Jim Herbsleb (Carnegie Mellon); and Shinichi Honiden (National Institute of Informatics, Japan). This project has close links to the complementary Motivating Jenny to Write Secure Software project.

The last decade has seen a massive democratisation of how software is developed. In the early days of the software industry, a would-be programmer would pursue a university degree, learn software development, and then work in a software house. With recent developments such as the Arduino, the Raspberry Pi, mobile phone apps, and the Internet of Things, virtually anyone may become a developer writing software that is then deployed to people around the world. “Johnny” may be working in a software house or may equally be working in their own time from their living room on software that comes into contact with myriad other systems around the world on a regular basis. How does that person think about security? What decisions do they make, and what drives them? This project will study a range of software in apps and devices that captures the range of “Johnnies” actually engaged in writing software in today’s world.

The project seeks to answer three main questions:

  • What typical classes of security vulnerabilities arise from developers’ mistakes?
  • Why do these mistakes occur? Are the APIs so complicated to use that they produce mistakes, as suggested by recent work from Darmstadt? Are there other factors, such as developers’ own misconceptions about security and how the software they write is supposed to handle it?
  • How may we mitigate these issues and promote secure behaviours?

The project’s first objective is to characterise developers’ approach to producing secure software by examining the artefacts produced and eliciting the developers’ awareness, attitudes, and assumptions about security. Do they think it’s someone else’s job? Do they care about security? Rashid suspects the project team will find a range of responses: some will care, some won’t; some will fail because the tools they are given make it hard to do secure programming. All of this will make it possible to determine how developers’ assumptions, behaviours, and awareness relate to the mistakes that appear in their software.

A schematic rendering of three degrees of secure software development: developers’ personal characteristics; those characteristics’ associated vulnerabilities in software; and the degrees of intervention to mitigate against them.

Next, the project will investigate the factors that affect developers’ security behaviours. The researchers seek not only to understand developers’ security design strategies, but also to mitigate their biases and accommodate constraints such as pressure to meet market deadlines. Many apps have very short lifetimes; these are constraints that need to be understood. Based on this work, the project hopes to develop and evaluate a range of cost-effective interventions for steering developers away from poor security design decisions, taking into account both the kinds of vulnerabilities to be avoided and the types of behaviour to be discouraged.

Earlier work on developers’ approach to error detection and recovery by Tamara Lopez and Marian Petre (Open University), an ethnographic analysis of how developers work, found three main stages. First: detect that something has gone wrong. Second: identify what is wrong. Third: undo the effects. In this context, errors can be beneficial because they show something has gone wrong.

With James Noble (Victoria University), Weir and Rashid have carried out complementary work to understand how developers learn about security and what encourages good security behaviour. This research found a pattern in the many interviews conducted with experienced people in industrial secure software development: challenges to what developers do encouraged them to engage with security. These challenges come from many directions: automated testing tools; pentesters and security experts; product managers; feedback from end users; the team’s own brainstorming sessions; and discussions with other development teams. All of these help developers think more about security and how to embed it in software.

The project hopes to build on this prior work as well as a small grant recently completed by Weir studying effective ways to intervene. Developers, Rashid concluded, do need our help. The project is eager to engage with others, receive critical feedback, and populate the space. Those interested can contact the project at contact@writingsecuresoftware.org.

The short discussion that followed raised the issue of sampling bias and how to engage people who are completely uninterested in security, an issue the project team has debated and recognises depends on sampling correctly. The design of the libraries developers use is often unhelpfully (and accidentally) complex; the project hopes to understand developers’ strategies for coping with this. Standard ways of programming might encourage or discourage good practice in this area. Cryptographic libraries and APIs are in particular not easy to use. The practices of open source developers, who have relationships within and across teams, might lend themselves to specific kinds of software features, though this also raises the question of how group identity influences specific behaviours. Finally, the possibility of regulation was raised, but this appears impractical when all sorts of people are developing software all over the world. Future projections include the idea of programmable cities, an enterprise far too large to be handled by a few people in a few organisations. Giving developers a set of requirements that are too complicated to complete won’t help the way they work.
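
The difficulty of cryptographic APIs mentioned above is concrete enough to illustrate. The sketch below, using only Python’s standard library, contrasts a common password-storage mistake with a safer pattern; it illustrates the class of error the project is studying rather than anything drawn from its data.

```python
import hashlib
import hmac
import os

def store_password_insecure(password: str) -> str:
    """A common mistake: a fast, unsalted hash. Identical passwords produce
    identical hashes, and precomputed tables make the result cheap to attack."""
    return hashlib.md5(password.encode()).hexdigest()

def store_password_better(password: str) -> tuple[bytes, bytes]:
    """Closer to current practice: a per-user random salt and a deliberately
    slow key-derivation function (PBKDF2, from the standard library)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive with the stored salt and compare without leaking timing."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password_better("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("Tr0ub4dor&3", salt, digest))                   # False
```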

Helen Sharp: Motivating Jenny to write secure software

Helen Sharp

Open University professor Helen Sharp’s talk at the June 2017 RISCS meeting presented the Motivating Jenny project. She began by noting that she knows very little about security. However, she knows a lot about software and its community and culture from studying software professionals, how they collaborate, and how they work with users, as well as different development methods. There are close links between this project and the complementary Why Johnny Doesn’t Write Secure Software project, particularly in terms of the researchers involved, but the two were developed separately. Funded by NCSC as part of RISCS, Motivating Jenny will be supported by academic and practitioner collaborators in the UK, Ireland, Japan, and Brazil.

Sharp, a newcomer to RISCS, has a background in software engineering; earlier in her career she developed software for large banks and other firms in the City of London. The software engineering group based at the OU brings together expertise in security, privacy, and digital forensics, as well as human behaviour. For the Motivating Jenny project, this combination is enhanced by experience in qualitative practice-based research, in which Sharp and researcher Tamara Lopez (Open University) have expertise. A crucial element is observing subjects in the real environment they work in every day as they perform the real tasks they are required to complete.

For the last ten years, Sharp has been looking at motivation in software engineering, conducting studies of professional developers both in offices and working remotely. Although software development is thought of as a lonely, solitary profession, particularly for those who work online, in fact it involves a lot of online collaboration. “They have a very wide community behind their screens.”

There are many ideas about motivation based on the notion that people who are happy are more motivated. Sharp cited, for example, Daniel H. Pink’s Drive, which prescribes autonomy, mastery, and purpose; J.S. Adams’ fairness-based equity theory; the work of Teresa Amabile, whose studies of professionals led her to propose the progress principle; psychologist Abraham Maslow’s hierarchy of needs; and Frederick Herzberg’s two-factor theory, which posits the interplay of positive (motivating) and negative (hygiene) factors. But a key question is: motivation to do what? Sharp’s work for the last decade has sought to understand what motivates software engineers to be software engineers and to do a good job. What do they enjoy? Why do they stay in the job? The answers are not always obvious. One developer she met had taken a 25% pay cut in order to move to a business that was using cutting-edge technology.

Based on a systematic literature review, the researchers developed a model of motivation in software engineering – but many aspects of it are contested. Partly, this is because software development has changed substantially from the time when a lot of this research was done, as has the environment in which software is written. The researchers are in the process of developing a new model for motivation and will incorporate these elements into the background that feeds into the Motivating Jenny project.

Helen Sharp's motivation graph

Motivation in Software Engineering (Helen Sharp)

The NCSC’s developer-centred security research call had four questions:

  • What does the developer profession look like currently?
  • How can we improve the tools that developers use?
  • How can the security culture in the developer community be improved?
  • How can we motivate developers to care more about security?

Based on their background, and taking motivation as the overarching framework, the research team hopes to provide some input into all four of these questions by investigating what motivates developers to do secure coding. The project focuses on developers who are not security specialists and is working with two companies. One is a progressive small company that has just started to say it needs to understand security. The second does good coding but hasn’t considered security at all; it is interested in motivation. The project’s outputs will include a pack of materials to communicate to the communities of professional developers.

Two things that motivate developers strongly are talking to others and peer recognition. Status within the profession is really important, and developers pick up new ideas such as agile development or object-oriented programming because their peers have done so. Why, then, aren’t security principles and practices used effectively? In Sharp’s experience, developers want to do a good job, so if they’re not using these principles and practices there must be a reason. Community and culture are vital influences on developer behaviour, so the question is how to seed the community and bring more people into the practice of writing secure code.

The project has three research questions and hypotheses:

  • What motivates developers? Their working hypothesis includes peer comparison, communities of practice, experience of failures, and knowing the impact their work has on the lives of their end users. What doesn’t work, based on the literature: financial incentives beyond the short term, policies, and general awareness.
  • How do we develop and sustain a culture of security? The project will draw on cultural transmission to understand how to ensure the culture of secure coding spreads once it’s been seeded. Other motivators include the impact on end users and problem-solving.
  • How can we facilitate community building for practices and technologies? The project will design interventions based on motivational and cultural factors and will engage practitioners. For the latter aspect, the project is seeking someone anchored in the profession to help them get into and build the right communities of practice, local groups, and online communities.

The project’s research activities will include:

  • Analysing existing data sets such as the annual study of the techniques in use by agile developers to characterise sections of the profession;
  • Conducting ethnographic studies with practitioners to understand their current practices and identify security-based motivational factors that can be used to spread better practices, both offline and online;
  • Refining existing motivation model(s) with security-specific findings;
  • Using constrained task studies to develop recommendations regarding a variety of specific security practices and security technologies;
  • Using the results of those studies to package recommendations as free practitioner-friendly resource packs;
  • Promoting findings and engagement with wider developer community(ies);
  • Designing and deploying a survey to refine the project’s findings according to different UK and global settings, such as Japan and Brazil.

Questions raised the issue of the context in which developers work, such as intense pressure to get products to market, which might dampen professionals’ ability to adopt secure coding practices. However, the project’s focus is on trying to seed the community because Sharp’s studies have shown that professionals are motivated by what their community is doing. The different pressures on developers in different environments are not the same as motivational factors, which may include the reasons why someone chooses to work in a highly pressured situation.

The project is in its early stages, and the researchers welcome engagement and comments. Those interested should contact the project through helen.sharp@open.ac.uk.

Research portrait: Charles Weir

Charles Weir

Charles Weir

“I could easily have become an academic to start with,” says Charles Weir, by way of explaining how it is that relatively late in his life he’s publishing his first peer-reviewed journal articles. Weir’s long career in advancing software development is the backdrop to the Master’s degree he completed at Lancaster University in 2016 and his participation in the Why Johnny Doesn’t Write Secure Software project. He recently completed an NCSC-funded small grant project on interventions to provide security support for developers.

Weir’s interest in secure software has developed over time through a series of career moves: he’s been a programmer and analyst; a consultant; the owner and manager of the Cumbria-based bespoke app development house Penrillian; and now he’s an academic researcher. Along the way, he was an early adopter of object-oriented programming, agile development, software patterns, and more recently secure app design when working on EE Cash on Tap, a predecessor to Android Pay. The consistent thread through all that, he says, is “the excitement of the bleeding edge, the new cutting-edge things that require you to really think things through and build things for the first time. I’m not good at repeating a task, and really like thinking things out the first time.”

Weir began his career with a physics degree from Cambridge, then, as he describes it, “went around the world with a rucksack”. On his return, he worked briefly for a computer retailer before joining Reuters’ new microprocessor group, where he had his first experience of teamwork. There, the “bleeding edge” he encountered included the BBC Micro, system design, one-way protocols, and, finally, object-oriented programming. After seven years, part of that time spent in the US, he segued into consultancy, working for other companies in Chicago and learning more about object-oriented software. Back in the UK he joined Object Designers, a virtual consultancy company led by OO pioneers Steve Cook and John Daniels. Here everyone worked from home, visited the companies they worked with, and met up about once a month. The consultancy, he says, “gave me a chance to do some of the stuff that had been a bit theory.”

One of Weir’s customers during this period was Symbian, then hoping to conquer the world with its mobile operating system, EPOC. When it came time to close down the consultancy, Weir spent three days a week helping Symbian’s internal teams design elements of the software destined for its new phones. The release of the iPhone in 2007 ended Symbian’s hopes of dominating the mobile operating system market, but it was a forward-thinking company: the mobile landscape Symbian CEO Colly Myers described in 2000 is remarkably accurate today.

A particular technique Weir dates to that time is one he calls “Captain Oates”, after the Antarctic explorer Lawrence Oates, who famously sacrificed himself in the hope of saving his fellow explorers. In software, Weir’s “Captain Oates” is a component that terminates itself when memory is running short so that other apps can keep running. This technique is now frequently used, typically as part of the operating system.
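A minimal sketch of the idea (mine, not Weir’s original implementation) might look like the following: a low-priority background task checks how much system memory remains and exits cleanly when it drops below a threshold, freeing resources for foreground apps. It assumes the third-party psutil package and uses placeholder names for the task’s real work.

```python
import sys
import time

import psutil  # third-party package, assumed available for this illustration

LOW_MEMORY_THRESHOLD = 50 * 1024 * 1024   # 50 MB; an arbitrary example value


def save_state() -> None:
    """Hypothetical hook: persist enough state for the task to resume later."""


def do_one_unit_of_work() -> None:
    """Hypothetical placeholder for the app's real background work."""


def run_background_work() -> None:
    while True:
        if psutil.virtual_memory().available < LOW_MEMORY_THRESHOLD:
            save_state()
            sys.exit(0)   # sacrifice this process so foreground apps keep running
        do_one_unit_of_work()
        time.sleep(1)
```

On modern mobile platforms much of this responsibility now sits with the operating system’s own low-memory handling, which is what is meant above by the technique typically being part of the operating system.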

“Captain Oates” surfaced while Weir was writing the 2000 book Small Memory Software with James Noble, whom he’d met at conferences on software patterns, an idea that came to public attention in 1994 with the publication of the book Design Patterns: Elements of Reusable Object-Oriented Software. Based on this idea of reusable design architectures, coupled with their backgrounds writing software for very small devices, Weir and Noble “dug up a whole series of patterns.” As they went along, they found that these applied not only to the small, memory-constrained, matchbox-sized computers they were used to but also to bigger systems that had to cope with memory-taxing amounts of data, such as the system that collects satellite data for NASA.

In 2002 Weir set up the bespoke app development house Penrillian, which created apps for Vodafone – in particular, the software for the Vodafone mobile broadband dongles – and to a lesser extent for other network operators. His commercial arrangements with Symbian gave him access to the company’s source code, enabling Penrillian to do work others couldn’t.

In 1998, Weir wrote a short guide, Patterns for Designing in Teams (PDF), intended to help developers working in teams improve their work. While the guide isn’t about security specifically, it provides a basis for thinking about how to incorporate security into the design process.

“I’m very interested in teams,” Weir says. “Because I’m not naturally an easy team player, I find the intellectual question of what makes a team work very interesting. I can be fascinated by it even though naturally I’m not particularly good at it – I can be more analytic and see things that people who take them for granted just don’t.” This aligns nicely with his work as a consultant, which taught him to approach every room as if he were the stupidest person in it. “Because you usually are, in terms of what they know about. But every now and then there might be something you can help them with.”

By 2012, Weir was finding that “The market for smart people in the UK doing mobile apps had really gone.” All that work was going offshore, so Weir looked around for something that wouldn’t soon follow suit, and landed on payment apps. EE Cash on Tap was a precursor to today’s Apple/Android Pay, though the commercial and technical complexity of EE’s approach meant it never became mainstream. It was this project that sparked Weir’s interest in security: “I realised there were going to be large amounts of money floating around, and if I didn’t do a reasonable job I could be liable for all that money. That was the point at which I reached out a hand for something like the “Dummies Guide to Software Security for Programmers” and found there was a gap in the shelf, and realised that the more I looked into it the less I could find anyone supporting anybody doing this.”

Co-author James Noble suggested he get in touch with Awais Rashid, and in 2015 Weir began his masters-by-research at Lancaster. The many interviews he conducted with developers and others – “I shamelessly used connections from my previous work” – led to his paper, I’d Like to Have an Argument, Please (PDF), in which he finds that secure software development is helped when the developers are challenged from multiple directions and made to think. The paper has been well-received, and led to other peer-reviewed papers. One of these studies the differences among the responses and concludes that secure app software development is at a very early stage, and another for the FSE conference suggests using games as a teaching tool because developers are so reluctant to read books – “Angry Birds meets software security”.

What surprised him most in this work, which was brought out in the “Argument” paper, is the wide range of approaches and advice developers were using. “I had sort of assumed that there was some secret out there that everyone knew except me. It turns out there wasn’t.” While there is a lot of material telling developers the top ten bugs of the week, what mistakes not to make, or how to use specific operating system security features, there still isn’t much telling developers how to do secure software in general, particularly in the mobile phone space. Worse, what there is tends to be rule-bound and is generally loathed by developers. Around 2010, he says, there was a shift away from the secure development processes of the past, led by Gary McGraw, who moved towards measuring whether security had been achieved without caring about how people got there. “He was the only person I came across who had written the book I was looking for, but it wasn’t very digestible from a developer point of view.” One of the difficulties in developing EE Cash, for example, was being told – wrongly, as it turned out – that various things couldn’t be done because they would violate EMV or PCI rules. Finding out that handed-down constraints like these are excuses rather than essentials is enough to turn any developer into a suspicious refusenik.

If there were magic answers to this conundrum, academic research seemed like the place to start looking. “My goal now is to change the world in one particular way, which is to get the software people write to be that small bit more secure.”

Informal support networks

Ivan Flechais

Ivan Flechais

Oxford University associate professor Ivan Flechais and Norbert Nthala investigated social relationships and their role in home data security, funded by a small grant from NCSC.

The reason for studying the home is that internet use is increasing, personal and home use of both work and non-work services is growing, and the growing value this represents is visibly attracting attackers who target those systems, devices, and data. In 2007, Symantec said home users accounted for 95% of all targeted attacks. Originally, the goal was to extract value from home users; more recently these attacks use the home as a stepping stone to attack others, as in the Christmas 2014 attacks that used compromised home devices against Xbox Live and the PlayStation Network, and the October 2016 attack on the Dyn DNS service. This trend means we are at risk both from homes and in our homes. Unlike most organisations, homes lack explicit support dedicated to mitigating threats, keeping software up to date, or procuring and managing devices through to end of life. When people need help, who do they call? This research aimed to work out what happens when home users are faced with these issues.

The state of the art in home data security is generally poor: mostly automated patching, antivirus (which many people distrust), and a reliance on raising awareness. Awareness-raising will never be an effective strategy for reaching everyone in a country’s population; it can’t be the primary thing people rely on – and there’s plenty of evidence to support that.

The study had two phases. The first was a qualitative exploration of how people make security decisions, based on 50 semi-structured interviews with UK home users analysed using Grounded Theory. The second phase used those results to inform a quantitative study designed to validate and generalise the qualitative findings; the researchers are still analysing the survey data collected from 1,032 UK residents.

The researchers found that although the technology industry tends to assume that the owner of a device is also its competent administrator, this is generally not true for home users. The result is a lot of informal networking. Those seeking help look for competence first and foremost – but not necessarily real competence so much as perceived competence. These users also list trust and continuity of care. People strongly report wanting a consistent source of adequate, personalised advice. Raising awareness generally targets the whole population, but what people actually seek is targeted and individualised help that’s consistent over time. People demonstrate a strong sense of responsibility for the advice they give, and the consequences if it’s wrong. How do we know what good-quality advice looks like, particularly in an informal setting?

In their survey of 1,032 participants, Flechais and Nthala find that people leverage their existing informal and social relationships. The most frequently named choice of helper is someone who works in data security, closely followed by those who have studied it, and third, people who have more experience than the help-seeker in working with technical devices and services. The rest of the list: people who have experienced a prior data security incident, have taken a technical course, work for a technical company, or have a technical job. This perception of competence also covers the likelihood that someone will copy or adapt another person’s security practices if that person is perceived to be more competent – an interesting notion of relative competence – or accept or offer unsolicited security advice.

People also crave trust. The choice of a source of advice, of a particular service, and the extent to which devices and services are shared are all influenced by trust. People respond to cues such as brand recognition and social relationships, as well as visual signals such as the SSL padlock, HTTPS, and logos.

Continuity of care, meaning the continuing availability of a source of help, also influences people’s preferences. When seeking help, they will pick friends over relatives, though not by much, then workmates, then service providers, and finally an IT repair shop. In other words, people exploit their social networks, an intriguing choice since the people they consult might be completely incompetent, and help-seekers’ own limited ability to assess competence is a further issue. Even so, they tend to choose the informal options first.

Flechais and Nthala found there is a complex culture around responsibility and duty of care. Home users take initiatives to protect themselves, but some also assume responsibility for others, though they are far more likely to offer unsolicited advice to family members than to friends. Those who offer advice feel the need to make good on situations where they have offered bad advice, a responsibility that’s determined by the social relationship.

To evaluate the quality of the security advice they’re given, home users rely on their perception that it’s something that a competent person does or recommends. Less reliably, however, they also fall prey to survival/outcome bias: nothing bad has happened, therefore it must be good. This fallacy – just because you haven’t been breached doesn’t mean you’re doing the right thing – was found in interviews, though not confirmed in the survey because of the difficulty of testing a bias. This bias underpins inaction, however, and is worth exploring in greater detail.

In comments, Angela Sasse noted that she and Monica Whitty are finding, in the Detecting and Preventing Mass-Marketing Fraud project and in work with Gumtree, that a lot of users exchange (often not very good) advice in the forums. Another small grant project, which interviewed people who had just bought a new laptop or phone about security, has found a surprising number of people who pay someone to come round once a month or once a quarter to perform updates and check their systems. How qualified these aides are is unknown.