Why Johnny doesn’t write secure software

Awais Rashid

The aim of the three-year EPSRC-funded Why Johnny Doesn’t Write Secure Software project, which began in April 2017, is to develop an empirically grounded theory of secure software development by the masses, Awais Rashid (Lancaster University) explained at the June 2017 RISCS meeting. The project’s collaborators include others at Lancaster University: Charles Weir, John Towse, and newcomer Dirk van Linden. From elsewhere, it includes Pauline Anthonysamy (Google Switzerland); Bashar Nuseibeh, Marian Petre, and Thein Tun (Open University); Mark Levine (Exeter); Mira Mezini (TU Darmstadt); Elisa Bertino (Purdue); Brian Fitzgerald (Lero); Jim Herbsleb (Carnegie Mellon); and Shinichi Honiden (National Institute of Informatics, Japan). The project has close links to the complementary Motivating Jenny to Write Secure Software project.

The last decade has seen a massive democratisation of how software is developed. In the early days of the software industry, a would-be programmer would pursue a university degree, learn software development, and then work in a software house. With recent developments such as the Arduino, the Raspberry Pi, mobile phone apps, and the Internet of Things, virtually anyone may become a developer writing software that is then deployed to people around the world. “Johnny” may be working in a software house or may equally be working in their own time from their living room on software that comes into contact with myriad other systems around the world on a regular basis. How does that person think about security? What decisions do they make, and what drives them? This project will study a range of software in apps and devices that captures the range of “Johnnies” actually engaged in writing software in today’s world.

The project seeks to answer three main questions:

  • What typical classes of security vulnerabilities arise from developers’ mistakes?
  • Why do these mistakes occur? Are the APIs so complicated to use that they produce mistakes, as suggested by recent work from Darmstadt? Or are there other factors, such as developers’ own misconceptions about security and how the software they write is supposed to handle it?
  • How may we mitigate these issues and promote secure behaviours?

The project’s first objective is to characterise developers’ approach to producing secure software by examining the artefacts produced and eliciting the developers’ awareness, attitudes, and assumptions about security. Do they think it’s someone else’s job? Do they care about security? Rashid suspects the project team will find a range of responses: some will care, some won’t; some will fail because the tools they are given make it hard to do secure programming. All of this will make it possible to determine how developers’ assumptions, behaviours, and awareness relate to the mistakes that appear in their software.

A schematic rendering of three degrees of secure software development: developers’ personal characteristics; those characteristics’ associated vulnerabilities in software; and the degrees of intervention to mitigate against them.

Next, the project will investigate the factors that affect developers’ security behaviours. The researchers seek not only to understand developers’ security design strategies, but also to mitigate their biases and accommodate constraints such as pressure to meet market deadlines. Many apps have very short lifetimes; these are constraints that need to be understood. Based on this work, the project hopes to develop and evaluate a range of cost-effective interventions for steering developers away from poor security design decisions, taking into account both the kinds of vulnerabilities to be avoided and the types of behaviour to be discouraged.

Earlier work by Tamara Lopez and Marian Petre (Open University), an ethnographic analysis of how developers detect and recover from errors, found three main stages of error detection and recovery. First: detect that something has gone wrong. Second: identify what is wrong. Third: undo the effects. In this context, errors can be beneficial because they show that something has gone wrong.

With James Noble (Victoria University), Weir and Rashid have carried out complementary work to understand how developers learn about security and what encourages good security behaviour. This research found a pattern in the many interviews conducted with experienced people in industrial secure software development: challenges to what developers do encouraged them to engage with security. These challenges come from many directions: automated testing tools; pentesters and security experts; product managers; feedback from end users; the team’s own brainstorming sessions; and discussions with other development teams. All of these help developers think more about security and how to embed it in software.

The project hopes to build on this prior work as well as a small grant recently completed by Weir studying effective ways to intervene. Developers, Rashid concluded, do need our help. The project is eager to engage with others, receive critical feedback, and populate the space. Those interested can contact the project at contact@writingsecuresoftware.org.

The short discussion that followed raised the issue of sampling bias and how to engage people who are completely uninterested in security, an issue the project team has debated and understands depends on sampling correctly. The design of the libraries developers use is often unhelpfully (and accidentally) complex; the project hopes to understand developers’ strategies. Standard ways of programming might encourage or discourage good practice in this area. Cryptographic libraries and APIs are in particular not easy to use. The practices of open source developers, who have relationships within and across teams, might lend themselves to specific kinds of software features, though this also leads to the question of how group identity influences specific behaviours. Finally, the possibility of regulation was raised, but this appears impractical when all sorts of people are developing software all over the world. Future projections include the idea of programmable cities, an enterprise far too large to be handled by a few people in a few organisations. Giving developers a set of requirements that are too complicated to complete won’t help the way they work.
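
The observation that cryptographic libraries and APIs are hard to use is easiest to see in code. The following minimal Python sketch (assuming the widely used third-party cryptography package; it is an illustration of the general point, not code produced by the project) contrasts a low-level construction that happily accepts an insecure mode with a high-level recipe that makes the safe choice the default:

    # Low-level API: nothing stops a developer choosing ECB mode, which leaks
    # plaintext patterns and provides no integrity protection.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    weak = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ciphertext = weak.update(b"16-byte message!") + weak.finalize()  # runs fine, but insecure

    # High-level API: Fernet bundles key handling, a safe mode, and
    # authentication, so the obvious call is also the secure one.
    from cryptography.fernet import Fernet

    f = Fernet(Fernet.generate_key())
    token = f.encrypt(b"a message of any length")
    assert f.decrypt(token) == b"a message of any length"

The point is not this particular library but the design principle: when the easiest path through an API is also the secure one, fewer of these mistakes reach production.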

Helen Sharp: Motivating Jenny to write secure software

Helen Sharp

Open University professor Helen Sharp’s talk at the June 2017 RISCS meeting presented the Motivating Jenny project. She began by noting that she knows very little about security. However, she knows a lot about software and its community and culture from studying software professionals, how they collaborate, and how they work with users, as well as different development methods. There are close links between this project and the complementary Why Johnny Doesn’t Write Secure Software project, particularly in terms of the researchers involved, but the two were developed separately. Funded by NCSC as part of RISCS, Motivating Jenny will be supported by academic and practitioner collaborators in the UK, Ireland, Japan, and Brazil.

Sharp, a newcomer to RISCS, has a background in software engineering; earlier in her career she developed software for large banks and other firms in the City of London. The software engineering group based at the OU brings together expertise in security, privacy, and digital forensics, as well as human behaviour. For the Motivating Jenny project, this combination is enhanced by experience in qualitative practice-based research, in which Sharp and researcher Tamara Lopez (Open University) have expertise. A crucial element is observing subjects in the real environment they work in every day as they perform the real tasks they are required to complete.

For the last ten years, Sharp has been looking at motivation in software engineering. Sharp has conducted studies on professional developers both in offices and working remotely. Although software development is thought of as a lonely, solitary profession, particularly for those who work online, in fact it involves a lot of online collaboration. “They have a very wide community behind their screens.”

There are many ideas about motivation based on the notion that people who are happy are more motivated. Sharp cited, for example, Daniel H. Pink’s Drive, which prescribes autonomy, mastery, and purpose; J.S. Adams’ fairness-based equity theory; the work of Teresa Amabile, whose studies of professionals led her to propose the progress principle; psychologist Abraham Maslow’s hierarchy of needs; and Frederick Herzberg’s two-factor theory, which posits the interplay of positive and negative factors. But a key question is: motivation to do what? Sharp’s work for the last decade has sought to understand what motivates software engineers to be software engineers and to do a good job. What do they enjoy? Why do they stay in the job? The answers are not always obvious. One developer she met had taken a 25% pay cut in order to move to a business that was using cutting-edge technology.

Based on a systematic literature review, the researchers developed a model of motivation in software engineering – but many aspects of it are contested. Partly, this is because software development has changed substantially from the time when a lot of this research was done, as has the environment in which software is written. The researchers are in the process of developing a new model for motivation and will incorporate these elements into the background that feeds into the Motivating Jenny project.

Motivation in Software Engineering (Helen Sharp)

The NCSC’s developer-centred security research call had four questions:

  • What does the developer profession look like currently?
  • How can we improve the tools that developers use?
  • How can the security culture in the developer community be improved?
  • How can we motivate developers to care more about security?

Based on their background and taking motivation as the overarching framework, the research team hopes to provide some input into all four of these questions by investigating what motivates developers to do secure coding. The project focuses on developers who are not security specialists. The project is working with two companies. One is a progressive small company that has just started to say it needs to understand security. The second does good coding but hasn’t considered security at all; it is interested in motivation. The project’s outputs will include a pack of materials to communicate to the communities of professional developers. One thing that does motivate developers a lot is talking to others, and peer recognition. Status within the profession is really important, and developers pick up new ideas such as agile development or object-oriented programming because their peers have. Why, therefore, aren’t security principles and practices used effectively? In Sharp’s experience, developers want to do a good job, so if they’re not using these principles and practices there must be a reason. Community and culture are vital influences on developer behaviour, so the question is how to seed the community and bring more people into the practice of writing secure code.

The project has three research questions and hypotheses:

  • What motivates developers? Their working hypothesis includes peer comparison, communities of practice, experience of failures, and knowing the impact their work has on the lives of their end users. What doesn’t work, based on the literature: financial incentives beyond the short term, policies, and general awareness.
  • How do we develop and sustain a culture of security? The project will draw on cultural transmission to understand how to ensure the culture of secure coding spreads once it’s been seeded. Other motivators include the impact on end users and problem-solving.
  • How can we facilitate community building for practices and technologies? The project will use interventions using motivational and cultural factors and engage practitioners. For the latter aspect, the project is seeking someone anchored in the profession to help them get into and build the right communities of practice, local groups, and online communities.

The project’s research activities will include:

  • Analysing existing data sets such as the annual study of the techniques in use by agile developers to characterise sections of the profession;
  • Conducting ethnographic studies with practitioners to understand their current practices and identify security-based motivational factors that can be used to spread better practices, both offline and online;
  • Refining existing motivation model(s) with security-specific findings;
  • Using constrained task studies to develop recommendations regarding a variety of specific security practices and security technologies;
  • Using the results of those studies to package recommendations as free practitioner-friendly resource packs;
  • Promoting findings and engagement with wider developer community(ies);
  • Designing and deploying a survey to refine the project’s findings according to different UK and global settings, such as Japan and Brazil.

Questions raised the issue of the context in which developers work, such as intense pressure to get products to market, which might dampen professionals’ ability to adopt secure coding practices. However, the project’s focus is on trying to seed the community because Sharp’s studies have shown that professionals are motivated by what their community is doing. The different pressures on developers in different environments are not the same as motivational factors, which may include the reasons why someone chooses to work in a highly pressured situation.

The project is in its early stages, and the researchers welcome engagement and comments. Those interested should contact the project through helen.sharp@open.ac.uk.

Research portrait: Charles Weir

Charles Weir

“I could easily have become an academic to start with,” says Charles Weir, by way of explaining how it is that relatively late in his life he’s publishing his first peer-reviewed journal articles. Weir’s long career in advancing software development is the backdrop to the Master’s degree he completed at Lancaster University in 2016 and his participation in the Why Johnny Doesn’t Write Secure Software project. He recently completed an NCSC-funded small grant project on interventions to provide security support for developers.

Weir’s interest in secure software has developed over time through a series of career moves: he’s been a programmer and analyst; a consultant; the owner and manager of the Cumbria-based bespoke app development house Penrillian; and now he’s an academic researcher. Along the way, he was an early adopter of object-oriented programming, agile development, software patterns, and more recently secure app design when working on EE Cash on Tap, a predecessor to Android Pay. The consistent thread through all that, he says, is “the excitement of the bleeding edge, the new cutting-edge things that require you to really think things through and build things for the first time. I’m not good at repeating a task, and really like thinking things out the first time.”

Weir began his career with a physics degree from Cambridge, then, as he describes it, “went around the world with a rucksack”. On his return, he worked briefly for a computer retailer before joining Reuters’ new microprocessor group, where he had his first experience of teamwork. There, the “bleeding edge” he encountered included the BBC Micro, system design, one-way protocols, and, finally, object-oriented programming. After seven years, part of it in the US, he segued into consultancy, working for other companies in Chicago and learning more about object-oriented software. Back in the UK he joined Object Designers, a virtual consultancy company led by OO pioneers Steve Cook and John Daniels. Here everyone worked from home, visited the companies they worked with, and met up about once a month. The consultancy, he says, “gave me a chance to do some of the stuff that had been a bit theory.”

One of Weir’s customers during this period was Symbian, then hoping to conquer the world with its mobile operating system, EPOC, and when it came time to close down the consultancy Weir spent three days a week helping Symbian’s internal teams design elements of the software destined for its new phones. The release of the iPhone in 2007 ended Symbian’s hopes of dominating the mobile operating system market, but it was a forward-thinking company: the mobile landscape Symbian CEO Colly Myers described in 2000 is remarkably accurate today.

A particular technique Weir dates to that time is one he calls “Captain Oates”, after the Antarctic explorer Lawrence Oates, who famously sacrificed himself in the hopes of saving his fellow explorers. In software, Weir’s “Captain Oates” terminates when memory is running short so that other apps can keep running. This technique is now frequently used, typically as part of the operating system.
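
A rough sketch of the idea in Python (assuming the third-party psutil package for memory statistics; the threshold and interval are invented for illustration, not values Weir prescribes) shows the shape of such a self-sacrificing process:

    # Illustrative "Captain Oates" check: a low-priority process watches free
    # memory and exits cleanly so that more important applications keep running.
    import sys
    import time
    import psutil

    LOW_MEMORY_THRESHOLD = 50 * 1024 * 1024  # 50 MB of available RAM (illustrative)

    def do_one_unit_of_work():
        pass  # the application's real, non-critical work would go here

    def run_low_priority_work():
        while True:
            if psutil.virtual_memory().available < LOW_MEMORY_THRESHOLD:
                print("Memory is short; terminating so other applications can continue.")
                sys.exit(0)  # the self-sacrifice
            do_one_unit_of_work()
            time.sleep(1)

The essential point is that the least important process volunteers to go first, freeing resources before the system is forced into a worse choice.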

“Captain Oates” surfaced while writing the 2000 book Small Memory Software with James Noble, whom he’d met at conferences on software patterns, which came to public attention in 1994 with the publication of the book Design Patterns: Elements of Reusable Object-Oriented Software. Based on this idea of reusable design architectures coupled with their backgrounds writing software for very small devices, Weir and Noble “dug up a whole series of patterns.” As they went along, they found that these applied not only to the small, memory-constrained, matchbox-sized computers they were used to but also to bigger systems that had to cope with memory-taxing amounts of data, such as the system that collects satellite data for NASA.

In 2002 Weir set up the bespoke app development house Penrillian, which created apps for Vodafone – in particular, the software for the Vodafone mobile broadband dongles – and to a lesser extent for other network operators. His commercial arrangements with Symbian gave him access to the company’s source code, enabling Penrillian to do work others couldn’t.

In 1998, Weir wrote a short guide, Patterns for Designing in Teams (PDF), intended to help developers working in teams improve their work. While the guide isn’t about security specifically, it provides a basis for thinking about how to incorporate security into the design process.

“I’m very interested in teams,” Weir says. “Because I’m not naturally an easy team player, I find the intellectual question of what makes a team work very interesting. I can be fascinated by it even though naturally I’m not particularly good at it – I can be more analytic and see things that people who take them for granted just don’t.” This aligns nicely with his work as a consultant, which taught him to approach every room as if he were the stupidest person in it. “Because you usually are, in terms of what they know about. But every now and then there might be something you can help them with.”

By 2012, Weir was finding that “The market for smart people in the UK doing mobile apps had really gone.” All that work was going offshore, so Weir looked around for something that wouldn’t soon follow suit, and landed on payment apps. EE Cash on Tap was a precursor to today’s Apple/Android Pay, though the commercial and technical complexity of EE’s approach meant it never became mainstream. It was this project that sparked Weir’s interest in security: “I realised there were going to be large amounts of money floating around, and if I didn’t do a reasonable job I could be liable for all that money. That was the point at which I reached out a hand for something like the “Dummies Guide to Software Security for Programmers” and found there was a gap in the shelf, and realised that the more I looked into it the less I could find anyone supporting anybody doing this.”

Co-author James Noble suggested he get in touch with Awais Rashid, and in 2015 Weir began his master’s by research at Lancaster. The many interviews he conducted with developers and others – “I shamelessly used connections from my previous work” – led to his paper, I’d Like to Have an Argument, Please (PDF), in which he finds that secure software development is helped when the developers are challenged from multiple directions and made to think. The paper has been well received, and led to other peer-reviewed papers. One of these studies the differences among the responses and concludes that secure app software development is at a very early stage, and another, for the FSE conference, suggests using games as a teaching tool because developers are so reluctant to read books – “Angry Birds meets software security”.

What surprised him most in this work, which was brought out in the “Argument” paper, is the wide range of approaches and advice developers were using. “I had sort of assumed that there was some secret out there that everyone knew except me. It turns out there wasn’t.” While there is a lot of material to tell developers the top ten bugs of the week, what mistakes not to make, or how to use specific operating system security features, there still isn’t much telling developers how to do secure software in general, particularly in the mobile phone space. Worse, what there is tends to be rule-bound and is generally loathed by developers. Around 2010, he says, there was a shift away from the secure development processes of the past, led by Gary McGraw, who moved to measuring whether security had been achieved without caring about how people got there. “He was the only person I came across who had written the book I was looking for, but it wasn’t very digestible from a developer point of view.” One of the difficulties in developing EE Cash, for example, was being told – wrongly, as it turned out – that various things couldn’t be done because they would violate EMV or PCI rules. Finding out that handed-down constraints like these are excuses rather than essentials is enough to make any developer into a suspicious refusenik.

If there were magic answers to this conundrum, academic research seemed like the place to start looking. “My goal now is to change the world in one particular way, which is to get the software people write to be that small bit more secure.”

Informal support networks

Ivan Flechais

Oxford University associate professor Ivan Flechais and Norbert Nthala investigated social relationships and their role in home data security, funded by a small grant from NCSC.

The home merits study because not only is internet use increasing, but personal and home use of both work and non-work services is growing, and the value this represents is observably attracting people who want to attack those systems, devices, and data. In 2007, Symantec said home users accounted for 95% of all targeted attacks. Originally, the goal was to extract value from home users; more recently these attacks use the home as a stepping stone to attack others, as in the Christmas 2014 attacks that used compromised home devices against Xbox Live and the PlayStation Network, and the October 2016 attack on Dyn’s DNS service. This trend means we are at risk from homes and more at risk in our homes. Unlike most organisations, homes lack dedicated support for mitigating threats, keeping software up to date, or managing procurement and end of life. When people need help, who do they call? This research aimed to work out what happens when home users are faced with these issues.

The state of the art in home data security is generally poor. Most of it is automated patching, antivirus (which many people distrust), and a reliance on raising awareness. Awareness will never be an effective strategy for helping all the people in the population of any country. It can’t be the primary thing people rely on – and there’s plenty of evidence to support that.

The study had two phases. The first was a qualitative exploration of how people make decisions based on 50 semi-structured interviews with UK home users that focused on security decision-making and were analysed using Grounded Theory. The second phase used those results to inform a quantitative study to validate and generalise the qualitative findings. The researchers are still studying the statistics derived from 1,032 UK residents.

The researchers found that although the technology industry tends to assume that the owner of a device is also its competent administrator, this is generally not true for home users. The result is a lot of informal networking. Those seeking help look for competence first and foremost – but not necessarily real competence so much as perceived competence. These users also list trust and continuity of care. People strongly report wanting a consistent source of adequate, personalised advice. Raising awareness generally targets the whole population, but what people actually seek is targeted and individualised help that’s consistent over time. People demonstrate a strong sense of responsibility for the advice they give, and the consequences if it’s wrong. How do we know what good-quality advice looks like, particularly in an informal setting?

In their survey of 1,032 participants, Flechais and Nthala found that people leverage their existing informal and social relationships. The most frequently named choice of helper is someone who works in data security, closely followed by someone who has studied it. Third, they name people who have more experience than they do with technical devices and services. The rest of the list: people who have experienced a prior data security incident, have taken a technical course, work for a technical company, or have a technical job. This perception of competence includes the likelihood that someone will copy or adapt another person’s security practices if that person is perceived to be more competent (an interesting notion of relative competence), or accept or offer unsolicited security advice.

People also crave trust. The choice of a source of advice, a particular service, and the extent of sharing devices and services are all influenced by trust. People respond to cues such as brand recognition, social relationships, and visual cues such as the SSL padlock, HTTPS, and logos.

Continuity of care, meaning the ongoing availability of a source of help, also influences people’s preferences. When seeking help, they will pick friends over relatives, though not by much, then workmates, then service providers, and finally an IT repair shop. People exploit their social networks, in other words: an intriguing choice, since the people they consult might be completely incompetent, and their own limited ability to assess competence is also an issue. Even so, they tend to choose the informal options first.

Flechais and Nthala found there is a complex culture around responsibility and duty of care. Home users take initiatives to protect themselves, but some also assume responsibility for others, though they are far more likely to offer unsolicited advice to family members than to friends. Those who offer advice feel the need to make good on situations where they have offered bad advice, a responsibility that’s determined by the social relationship.

To evaluate the quality of the security advice they’re given, home users rely on their perception that it’s something that a competent person does or recommends. Less reliably, however, they also fall prey to survival/outcome bias: nothing bad has happened, therefore it must be good. This fallacy – just because you haven’t been breached doesn’t mean you’re doing the right thing – was found in interviews, though not confirmed in the survey because of the difficulty of testing a bias. This bias underpins inaction, however, and is worth exploring in greater detail.

In comments, Angela Sasse noted that she and Monica Whitty are finding in the Detection and Prevention of Mass-Marketing Fraud project and in work with Gumtree that a lot of users exchange (often not very good) advice in the forums. Another small grant project interviewed people who have just bought a new laptop or phone on the subject of security, and this project has found a surprising number of people who pay someone to come round once a quarter or once a month to perform updates and check their systems. How qualified these aides are is unknown.

Beyond dissemination

The overarching aim of this study by Rikke Bjerg Jensen and David Denney was to better understand how academics can demonstrate the impact of their cyber security research and move it beyond purely academic dissemination. This small grant project, funded by NCSC, was born of the researchers’ own frustrations when trying to determine the extent to which their DSTL-funded research into social media use by military personnel had fed into MoD policy and practice. Instead of finding answers, they were simply told to trust that the research and its findings would be taken seriously by military leadership and policy makers.

The dissemination study created an opportunity to speak to a wide range of stakeholders from both inside and outside academia and discuss expectations about how collaboration might facilitate better usage of academic research. The researchers expressed their concern that research findings tend to disappear into a vortex, which they call “The Void”. The issues they were interested in were well summed up by the CISO of a global organisation, cited in the presentation, who told them that academic research was generally not well disseminated outside of academic circles and did not reach him in a form that’s useful in the real world. Accordingly, they set out to find ways to present academic work that might foster greater impact. One simple idea was producing new forms of output, such as one-page summaries, a seemingly small thing but a big change from the usual 100-page report or technical article.

Jensen and Denney conducted a small group of interviews with stakeholders who had engaged with academics in previous research projects, asking what impact meant to them, how important it was, what it looks like, what their expectations were, what kinds of partnerships they saw as useful, and how to do things differently. Alongside that, they conducted a separate study on impact case studies submitted to the 2014 Research Excellence Framework (REF2014) where they used cyber security-related keywords to explore how research projects demonstrated impact. In the process of identifying impact from cyber security projects, they found that the way REF2014 categorises case studies is somewhat arbitrary. These two pieces of research exposed a profound split between non-academics, who want to understand from the outset what the effect of the research will be, and researchers, who feel that impact is too narrowly defined. For academics, navigating this difference is a challenge.

Their main findings:

  • Impact is a dynamic process that can and should occur at every stage of the research cycle;
  • Stakeholders’ expectations in relation to cyber security research were varied and sometimes conflicting;
  • The way impact was categorised and assessed in REF2014 appeared to be arbitrary, and assumes an agreed understanding of the meaning of “impact”;
  • Over-emphasising impact in cyber security research creates divisions between people-oriented and technical-oriented research.

It emerged in the interviews that “impact” is not a generic concept but a differentiated one. Several models were proposed by interviewees. A DSTL fellow proposed two options: a transactional model, in which stakeholders learn from the research when the findings are delivered, and a co-creation model, in which expertise is shared and participants learn from each other throughout. Crucially, which model is being followed needs to be specified at the outset. An external RCUK champion proposed four types of impact: pedagogical, in which the research is turned into teaching material; intellectual, the research influences policy-making and decisions; instrumental, the research delivers tools, capabilities, and techniques; and polemical, going public with the results when any attempt to demonstrate impact has failed. Of these, intellectual impact is the one that’s difficult to document. Polemic can be a high-risk strategy. Finally, a data analyst from the MoD offered a mnemonic checklist called “TEPID OIL”: training, equipment, personnel, infrastructure, doctrine (and policy), organisation, information, logistics. Using that model, impact has to be shown in all those categories.

The big question moving forward into more impact-driven research is the meaning of “impact” to various stakeholders. Academics use the notion of impact every day as if there’s a common meaning, but, as this small study shows, it’s much more nuanced. An additional finding that surfaced is that some stakeholders feel exploited when, from their point of view, academics come in, take data, and disappear. A cultural change is necessary: researchers must build their relationships with stakeholders early in the research cycle and on a basis of genuinely wanting to engage with the problems that stakeholders have identified.

Intervention

Charles Weir

In his presentation at the June 2017 RISCS meeting, Charles Weir, a researcher at Lancaster University, outlined his work with Awais Rashid (Lancaster) and James Noble (Victoria University) studying ways to intervene to provide software developers with security support. The project, which is based at Lancaster University, is in its second year.

Weir’s research question: How can you intervene from outside to change what a developer does?

To tackle this, Weir interviewed a number of experts who had performed such (presumably successful) interventions. Eight strategies came out of these interviews.

  • Almost all interviewees mentioned hosting incentivisation workshops in the early stages of their projects. The consensus on the best approach was to scare, rather than nudge, developers, but to be sure to provide solutions.
  • Threat modelling.
  • Choice of components. For example, one penetration tester said that when testing Ruby on Rails systems guessing which components had been used made it easy to identify the most likely weaknesses.
  • Developer training.
  • Automated code analysis.
  • Pen testing, though Weir noted that fewer of those interviewed (30% to 40%) mentioned this than the researchers had expected.
  • Manual code review.
  • Continuous reminders through a drip feed of jokes, competitions, and nudges to keep security issues in the front of developers’ minds.

From there, Weir set out to determine which of these interventions was most worth pursuing. Strategically, the best ones to pick are those that are cheapest and easiest – that is, that cost the least, and require the least effort and discipline from the developers themselves.

Five of these options qualify. What surprised the researchers is that three of these are predominantly social changes to developers’ methods of working rather than technical changes to the code they produce. The three are: developing a threat model, motivational workshops, and continuous reminders. The other two low-cost but effective interventions are automated code analysis and informed choice of components. Of these, only static analysis is purely technical – though even that option requires developers to take note of the results it produces. The researchers therefore recommend focusing on these five. A fuller report is available.
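
Of these interventions, automated code analysis is the most concrete to picture. The snippet below is an invented Python example of the sort of patterns a static analyser (a tool such as Bandit, for instance) typically reports; none of the names come from Weir’s study:

    # Invented example code containing issues a static analyser would normally flag.
    import hashlib
    import subprocess

    DB_PASSWORD = "s3cret"  # hardcoded credential committed to the repository

    def archive_logs(filename):
        # Building a shell command from user-supplied input invites command injection.
        subprocess.call("tar czf backup.tgz " + filename, shell=True)

    def store_password(password):
        # MD5 is unsuitable for passwords; a salted, deliberately slow hash is needed.
        return hashlib.md5(password.encode()).hexdigest()

Wired into the build or continuous integration, such a tool turns these patterns into warnings a developer sees while the code is still fresh, which is part of what makes the intervention cheap.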

In response to questions, Weir noted that although it might be tempting to conclude that developers ignore manual code review on the basis that it hasn’t been useful, he has found that it’s more often the case that these reviews are hard personally for developers, and therefore tend to be avoided if possible.

Threat modelling is more effective when it focuses on assets rather than attacker models, which are difficult for developers to understand. Thus, developers can focus on things attackers might want to steal, things that need protection, and stepping stones to further attacks, such as login credentials and reputation. All of these recommendations are well-understood; what’s hard is getting development teams to pick them up.
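
As a concrete and entirely hypothetical illustration of an asset-centred model, the sketch below records, in plain Python data, what an attacker might want from each asset, how it is protected, and what it could be a stepping stone to; the assets and field names are invented for this example:

    # Hypothetical asset-centred threat model kept as simple, reviewable data.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Asset:
        name: str
        attacker_value: str                  # why an attacker would want it
        protections: List[str] = field(default_factory=list)
        stepping_stone_to: List[str] = field(default_factory=list)

    THREAT_MODEL = [
        Asset(
            name="login credentials",
            attacker_value="account takeover and reuse on other services",
            protections=["salted password hashing", "rate limiting"],
            stepping_stone_to=["customer records", "payment API"],
        ),
        Asset(
            name="service reputation",
            attacker_value="phishing users under a trusted name",
            protections=["code signing", "incident response plan"],
        ),
    ]

    def unprotected_assets(model):
        """Return assets with no recorded protection, as prompts for review."""
        return [a.name for a in model if not a.protections]

Kept next to the code and revisited as features change, even a list this small gives a development team something concrete to challenge and argue about, which the earlier interview work suggests is itself part of the value.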

Preventing phishing won’t stop ransomware spreading

Steven J. Murdoch

Ransomware is in the news again, with Reckitt Benckiser reporting that disruption caused by the NotPetya ransomware could have cost it up to £100 million. In response, just as after every previous ransomware incident, the security industry started giving out advice – almost universally emphasising the importance of not opening phishing emails.

The problem is that this advice won’t work. Putting aside the fact that such advice is often so vague as to be impossible to put into action, the cause of recent ransomware outbreaks is not people opening phishing emails:


  • WannaCry, which notably caused severe disruption to the NHS, spread by automated scanning of computers vulnerable to an NSA-developed exploit. Although the starting point was initially assumed to be a phishing email, this was later debunked – only network scanning was used.
  • The Mole Ransomware attack that hit many organisations, including UCL, was initially thought to be spread by employees clicking on links in phishing emails. Subsequent analysis found this was incorrect and most likely the malware spread through malicious advertisements on legitimate websites.
  • NotPetya was initially thought to have been spread through Russian or Ukrainian phishing emails (explaining why that part of the world was so badly affected). It turned out not to have involved phishing at all: the outbreak started through a tampered software update to the MEDoc tax accounting software mandated by the Ukrainian government. Once inside an organisation, NotPetya then spread using the same exploit as WannaCry or by compromising administrative credentials.

Here are three major incidents, making international news, and the standard advice to “be vigilant” when opening emails or clicking links would have been useless. Is it any surprise that security advice gets ignored?

Not only is common anti-phishing advice unhelpful but it shifts blame to individuals (who are not in a position to prevent or mitigate most attacks) away from the IT industry and staff (who are). It also misleads management into thinking that they can “blame-and-train” their employees rather than investing in well engineered preventative security mechanisms and IT systems that can recover from compromise.

And there are things that can be done which have been shown to be effective, not just against the current outbreaks but many in the past and likely future. WannaCry would have been prevented by applying software updates, but the NotPetya outbreak was caused by a software update. The industry needs to act promptly to ensure that software updates are safe and reliable before customers become even more wary about installing them.

The spread of WannaCry and NotPetya within companies could have been prevented or slowed through better operational practices such as segmenting networks and limiting the use of administrative privilege. We’ve known this approach to be effective, but better tools and practices are needed to avoid enhanced security mechanisms being a drag on an organisation’s productivity.

Mole could have been prevented by ad-blocking browser extensions. The advertising industry is in open war against ad-blocking because it harms their income stream, but while they keep on spreading malware through their networks I have limited sympathy.

Well maintained and protected backups are essential to allow recovery, whether from ransomware, purely destructive attacks, or hardware failure. The security techniques above are effective, but these measures will not prevent every attack so mechanisms are needed to efficiently deal with the aftermath.

Most importantly, we need to move away from security being a set of traditions passed from generation to generation with little or no reason to believe they are effective (so-called “best practice”) towards well engineered systems following rigorous, evidence-based guidance on state-of-the-art cybersecurity principles, standards, and practices.


This article by Steven J. Murdoch also appears on Bentham’s Gaze, the blog of the UCL Information Security group.

The other human dimension

Madeline Carr

Siraj Shaikh

In their talk at the June 2017 RISCS meeting, Madeline Carr (Cardiff University) and Siraj Shaikh (Coventry University) outlined a new project funded under the human dimensions call. Beginning June 1, 2017, the project studies the “other human dimension” – that is, not end users, but the policy makers who must assess and make decisions about evidence. Shaikh is a professor in system security; Carr is a reader in international relations who looks specifically at questions of cyber security, technology transfer, and emerging technologies from a global perspective. The project has hired a law postdoc who specialises in legal frameworks around transnational crime and a specialist in discourse analysis.

Assessing evidence poses a unique set of technical, behavioural, and policy challenges. The environment is fast-moving and changing constantly. The ability of a state to respond effectively is fundamental to its national security. The evidence itself can be contradictory, biased, and even politicised in cases where cyber security firms align with specific governmental and national interests. The complex matrix of interests and agendas may disrupt the clarity policy makers want. Accordingly, Carr and Shaikh shifted their focus to this other human dimension: the UK’s cyber security policy makers and advisors. This is a small and disparate group of people with varying levels of technical expertise whose responsibility goes beyond their own organisations.

There is, Carr and Shaikh observed, a distinct lack of research to support this community, despite the importance of the task they’ve been assigned. This was a gap that was noted in the 2016 national cyber security strategy.

They began therefore with the following research question: How effective are the judgements this group makes after a cyber event when it has to use available evidence to evaluate threats, risks, mitigation, and consequences? To answer it, Carr and Shaikh set out three objectives:

  • Evaluate what exactly constitutes the evidence presented to and accessed by policy makers, how they privilege and order that evidence and what the quality of that evidence is;
  • Identify the particular challenges of decision-making in this context and evaluate how effectively policy makers make use of evidence for forming advice;
  • Develop a framework for assessing the capacity of evidence-based cyber security policy-making that can be used to make recommendations for improvement and that can be applied to other public, private, and international cohorts.

The project will comprise three work packages. Based on discussions with the project’s partners, GCHQ and the Foreign and Commonwealth Office, the first work package will begin with a mapping exercise to understand the landscape of cyber security policy makers and how they share and source evidence. In addition, the first work package will assess evidence through interviews at all levels of government, a survey, and analysis, and will develop a criteria-based framework. The second work package will create, conduct, and report on a policy crisis game, a technique that has been used widely for understanding decision-making in a crisis; Carr and Shaikh will adapt it for evaluating evidence. The games’ scenarios will be based on events that have actually happened, but the evidence will be fabricated. The third and final work package will provide analysis and recommendations, including criteria for how policy makers should better engage with evidence.

In terms of impact, the key aim is to support the UK policy community and help them understand what their weaknesses and unconscious biases are. The researchers believe the results could have potential for extension into the private sector via the implementation of the Network and Information Systems Directive and the General Data Protection Regulation. It could also have a capacity-building role for use by foreign governments that are also struggling to engage with evidence and make decisions.

A number of questions arose. One raised the issue of policy that’s set by legal judgements, especially those emerging from areas that talk of safety or other things rather than cyber security. Another asked how the project would evaluate the “goodness” of a decision, given the many examples of areas where good decisions nonetheless cause bad results. A third asked about the validity of the intelligence that drives much of cyber security. A fourth asked about the many times that policy-making is reactive. Currently, for example, there is a lot of focus on ransomware, but not on the underpinning issues that need to be addressed. Finally, a questioner asked whether the adversarial nature implicit in “cyber security” set a particular outcome.

In response, Carr and Shaikh said that the project’s rather narrow focus means that legal judgements are largely out of scope unless they are raised during the interviews. The project does not aim to evaluate the decisions so much as whether policy makers can discern the difference between authoritative and poor-quality evidence, what kinds of evidence are useful, and what helps them decide. Intelligence-based threat reports are one type of evidence; however, policy makers need to be critical of all evidence and understand its source and the information it’s drawing on. The project is keen to bring in proactive evidence, and believes that policy games will prove a good tool for developing capacity. Finally, the project specifically looks at the people in the British civil service who are responsible for making decisions in response to some kind of threat, which is a subset of the people engaged with other types of security.

The long tail of cyber security

Part of the mandate for RISCS in its second phase is to broaden its focus from large enterprises to include SMEs, both as research subjects and as community participants. RISCS has some prior experience to draw on, as two of the first-phase projects sought to form partnerships with SMEs. This posting discusses the difficulties these collaborations exposed with a view to finding a way forward.

UCL researcher Simon Parkin

Productive Security, led by RISCS director, UCL professor Angela Sasse, studied how to make security policies work with, instead of against, users. As part of the project, researcher Simon Parkin led an effort to understand the security problems of both commercial and charitable SMEs. Choice Architectures looked into using “nudges” to encourage better decision-making; Lynne Coventry, the director of the Psychology and Communication Technology (PaCT) lab at Northumbria University, led an effort to work with SMEs. Both researchers found mismatches between the needs of SMEs and the needs of academic researchers.

Small-to-medium-sized enterprises (SMEs) pose several particular challenges for cyber security: their large numbers add up to a significant part of the internet infrastructure; they tend to lack the specialist resources that enable large enterprises to protect themselves; and there is little consistent research to draw on. The numbers are compelling: the government’s 2017 Cyber Security Breaches Survey found that overall 46% of Britain’s businesses identified a breach or attack in 2016, that the likelihood grew with business size, and that medium-sized firms (66%) were nearly as frequent targets as large ones (68%). However, when translated into raw numbers those percentages are more alarming: according to the Department for Business, Innovation and Skills, there are nearly five times as many UK companies with 50 to 249 employees as there are companies with more than 250 employees.

There are pragmatic reasons for focusing on SMEs to improve security across the board. As of early 2016, government figures show that small businesses make up 99.3% of Britain’s 5.5 million private sector businesses, and SMEs make up 99.9%. SMEs account for 60% of the country’s private sector employment and 47% of the private sector’s turnover. In aggregate, therefore, this “long tail” of businesses is economically highly significant. They are also highly significant in ensuring cyber security: today’s networked supply chains mean that a single small supplier can provide the ingress for attackers seeking to penetrate much larger enterprises. An example of this was the 2013 breach of the US retailer Target, which cost the company $39 million in victim compensation, caused an approximately 40% drop in its profits that quarter, and forced the CEO to resign; the attackers first broke in via a much smaller refrigeration, heating, and air-conditioning subcontractor.

One difficulty in studying SMEs is the size of the category: millions of “SMEs” is as varied a demographic as “people over 55”. As defined in the UK, “SMEs” includes everything from sole traders to mid-sized organisations with 250 employees. It covers organisations with degrees of maturity ranging from early-stage start-ups struggling to afford an Ikea door to use as a desk to established companies with £1 million-plus annual turnover, and from family-owned local businesses to a 200-person growing enterprise. And it includes charities, which display some distinctive features.

Both charities and similarly sized commercial organisations have full-time and part-time staff, but, as outside researcher Emma Osborn (Oxford) has also found in studying the barriers SMEs face in implementing cyber security, small-to-medium-sized charities also rely on volunteers and are much more closely regulated. Unlike their commercial counterparts, charities may have access to discounted business productivity software and IT support. Smaller businesses, by contrast (a point Osborn also supports), may rely on software and services similar to those intended for home users. Anecdotal evidence suggests that around the 200-employee mark these companies start to look like larger corporations, but they still aren’t just small versions of large organisations.

The upshot, as Parkin and fellow researcher Andrew Fielder found in collaborating with an experienced outsourced IT services provider, is that the SMEs’ IT systems are equally diverse. Their project therefore sought to draw out a series of archetypes that could be used to make the scale and complexity of SMEs tractable. In some cases, someone might be running their business network connection through a phone plan, removing a whole layer of threats. Alternatively, they may rely on old IT they can’t update, which also affects security. A multi-site chain of restaurants will look quite different. The key actors inside these organisations may or may not include a dedicated IT person. In the smaller organisations, often the CEO takes broad responsibilities, including for IT; in other cases security is not kept separate but rolled into other compliance areas, such as data protection.

Collated SME archetypes from Parkin’s research

This level of variation across SMEs adds to the challenge for researchers by making it difficult to generalise from any particular engagement – or set of engagements – to draw out patterns and lessons that carry across this diverse landscape.

Still, there are many reasons why researchers want to work with SMEs. It’s an under-researched area. As the numbers show, SMEs are often targets for criminals. They’re a good testbed for driving innovation. Working with them helps create an evidence base that can lead to the adoption of best practices rather than succumbing to the latest marketing fad. Finally, the results of such collaborations can have a real impact.

The bigger question is why SMEs would want to work with researchers and how to make that research an experience that benefits both sides. SMEs have little time and resources to devote to research that doesn’t directly benefit their bottom line. Where consultants say they can offer definite solutions to SMEs’ problems, researchers say openly that they don’t know the answers; SMEs hoping that researchers will provide quick solutions based on the data they collect are likely to be frustrated. The diversity of participating businesses also means that although researchers can analyse the data they collect and draw conclusions, these conclusions may not be applicable for a different set of SMEs. RISCS hopes to compensate for this difficulty by including IT providers for SMEs on its practitioners panel.

The mismatch between business and academic cycles is a particular problem because it takes time to develop trusted relationships. A new PhD student can’t just be slotted into the place of the last one. Plus, time itself moves differently in businesses versus academia. Researchers may need flexibility in scheduling interviews with SME staff, if those staff are willing to participate but can’t afford to take time away from their everyday tasks to do so. For this reason, RISCS researchers have found that engaging in a meaningful way and understanding the drivers for their security-related decisions requires them to keep interviews short and explore security from the participants’ perspective.

In the longer term, the hope is that research within SMEs, perhaps via the consultants and specialists who provide IT services to them, will lead to solutions that will improve their ability to defend themselves. In the meantime, research is still working on the first step of identifying and understanding the challenges SMEs face in managing security. To date, every study has examined a different set/community of small organisations or involves just a small set of participants or organisations. The small sample sizes and the already-discussed diversity of SMEs make it difficult to translate findings across studies toward a unified understanding that subsequent research can build on or extend. The diversity of RISCS researchers and the community’s experience to date put us in a good position to pursue further work in this area.

RISCS is eager to engage with SMEs, their representatives, and those who support them. SME IT providers may like to participate in our Practitioners Panel. If you are interested in getting involved as an SME, please email us at sme@riscs.org.uk.

Observing the WannaCry fallout: confusing advice and playing the blame game

As researchers who strive to develop effective measures that help individuals and organisations to stay secure, we have observed the public communications that followed the WannaCry ransomware attack of May 2017 with increasing concern. As in previous incidents, many descriptions of the attack are inaccurate – something colleagues have pointed out elsewhere. Our concern here is the advice being disseminated, and the fact that various stakeholders seem to be more concerned with blaming each other than with working together to prevent further attacks affecting organisations and individuals.

Countries initially affected by WannaCry. From Wikimedia Commons (user:Roke).

Let’s start with the advice that is being handed out. Much of it is unhelpful at best, and downright wrong at worst – a repeat of what happened after Heartbleed, when people were advised to change their passwords before the affected organisations had patched their SSL code. Here is a sample of real advice sent out to staff in a major organisation post-WannaCry:

“We urge you to be vigilant and not to open emails that are unexpected, unusual or suspicious in any way. If you experience any unusual computer behaviour, especially any warning messages, please contact your IT support immediately and do not use your computer further until advised to do so.”

Useful advice has to be correct and actionable. Users, who have to cope with dozens, maybe hundreds, of unexpected emails every day, most containing links and many accompanied by attachments, cannot take ten minutes to ponder each one before deciding whether to respond. Such instructions also implicitly and unfairly suggest that users’ ordinary behaviour plays a major role in causing major incidents like this one. RISCS advocates enlisting users as part of the frontline defence. Well-targeted, automated blocking of malicious emails lessens the burden on individual users and builds resilience for the organisation in general.

In an example of how to confuse users, The Register reports that the City of London Police sent out its “advice” via email in an attachment entitled “ransomware.pdf”. So users are simultaneously exhorted to be “vigilant” and not open unexpected emails, and required to open an email attachment in order to get that advice. The confusion resulting from contradictory advice is worse than the direct consequences of the attack: it enables future attacks. Why play Keystone Cyber Cops when the UK’s National Technical Authority for such matters, the National Cyber Security Centre, offers authoritative and well-presented advice on its website?

Our other concern is the unedifying squabbling between spokespeople for governments and suppliers, blaming each other for running unsupported software, not paying for support, charging to support unsupported software, and so on, with security experts weighing in on all sides. To a general public already alarmed by media headlines, finger-pointing creates little confidence that either party is competent or motivated to keep secure the technology on which all our lives now depend. When the supposed “good guys” expend their energy fighting each other, instead of working together to defeat the attackers, it’s hard to avoid the conclusion that we are most definitely doomed. As Columbia University professor Steve Bellovin writes, the question of who should pay to support old software requires broader collaborative thought; in avoiding that debate we are choosing to pay as a society for such security failures.

We would refer those looking for specific advice on dealing with ransomware to the NCSC guidance, which is offered in separate parts for SMEs and home users and enterprise administrators.

Much of NCSC’s advice is made up of things we all know: we should back up our data, patch our systems, and run anti-virus software. Part of RISCS’ remit is to understand why users often don’t follow this advice. Ensuring backups remain uninfected is, unfortunately, trickier than it should be. Ransomware will infect – that is, encrypt – not only the machine it’s installed on but any permanently-connected physical or network drive. This problem ought to be solved by cloud storage, but it can be difficult to find out whether cloud backups will be affected by ransomware, and technical support documentation often simply refers individuals to “your IT support”, even though vendors know few individuals have any. Dropbox is unusually helpful, and provides advice on how to recover from a ransomware attack and how far it can help. Users should be encouraged to read such advice in advance and factor it into backup plans.

There are many reasons why people do not update their software. They may, for example, have had bad experiences in the past that lead them to worry that security updates will fail or leave their system damaged, or incorporate unwanted changes in functionality. Software vendors can help here by rigorously testing updates and resisting the temptation to bundle in new features. IT support staff can help by doing their own tests that allow them to reassure their users that they will help resolve any resulting problems in a timely manner.

In some cases, there are no updates to install. The WannaCry ransomware attack highlighted the continuing use of desktop Windows XP, which Microsoft stopped supporting with security updates in 2014. A few organisations still pay for special support contracts, and Microsoft made an exception for WannaCry by releasing a security patch more widely. Organisations that still have XP-based systems should now investigate to understand why equipment using an unsafe, outdated operating system is still in use. Ideally, the software should be replaced with a more modern system; if that’s not possible the machine should be isolated from network connections. No amount of reminding users to patch their systems or telling them to “be vigilant” will be effective in such cases.

This article also appears on the Bentham’s Gaze blog.