Awais Rashid 

As Awais Rashid (Lancaster University) explained to the June 2017 RISCS meeting, the aim of the three-year EPSRC-funded Why Johnny Doesn’t Write Secure Software project, which began in April 2017, is to develop an empirically grounded theory of secure software development by the masses. The project’s collaborators include others at Lancaster University: Charles Weir, John Towse, and newcomer Dirk van der Linden. From elsewhere, it includes Pauline Anthonysamy (Google Switzerland); Bashar Nuseibeh, Marian Petre, and Thein Tun (Open University); Mark Levine (Exeter); Mira Mezini (TU Darmstadt); Elisa Bertino (Purdue); Brian Fitzgerald (Lero); Jim Herbsleb (Carnegie Mellon); and Shinichi Honiden (National Institute of Informatics, Japan). This project has close links to the complementary Motivating Jenny to Write Secure Software project.

The last decade has seen a massive democratisation of how software is developed. In the early days of the software industry, a would-be programmer would pursue a university degree, learn software development, and then work in a software house. With recent developments such as the Arduino, the Raspberry Pi, mobile phone apps, and the Internet of Things, virtually anyone may become a developer writing software that is then deployed to people around the world. “Johnny” may be working in a software house or may equally be working in their own time from their living room on software that comes into contact with myriad other systems around the world on a regular basis. How does that person think about security? What decisions do they make, and what drives them? This project will study a range of software in apps and devices that captures the range of “Johnnies” actually engaged in writing software in today’s world.

The project seeks to answer three main questions:

  • What typical classes of security vulnerabilities arise from developers’ mistakes?
  • Why do these mistakes occur? Are the APIs so complicated to use that they induce mistakes, as suggested by recent work from Darmstadt? Are there other factors, such as developers’ own misconceptions about security and about how the software they write is supposed to handle it?
  • How may we mitigate these issues and promote secure behaviours?
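The Darmstadt work cited above centres in part on hard-to-use cryptographic APIs. As an illustration of the kind of pitfall involved (a well-known Java example, not one drawn from the project itself): asking for a cipher by algorithm name alone silently selects ECB mode on standard JVMs, so identical plaintext blocks encrypt to identical ciphertext blocks, leaking structure.

```java
import java.util.Arrays;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class EcbPitfall {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        // Pitfall: "AES" alone resolves to "AES/ECB/PKCS5Padding"
        // in the default SunJCE provider.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);

        byte[] plaintext = new byte[32]; // two identical 16-byte blocks
        byte[] ct = cipher.doFinal(plaintext);

        // Under ECB, identical plaintext blocks yield identical
        // ciphertext blocks.
        boolean leaks = Arrays.equals(
            Arrays.copyOfRange(ct, 0, 16),
            Arrays.copyOfRange(ct, 16, 32));
        System.out.println("identical ciphertext blocks: " + leaks);
    }
}
```

The safer spelling is an explicit transformation such as "AES/GCM/NoPadding" with a fresh IV per message; the point of the research question above is that the API makes the unsafe spelling the shortest one.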

The project’s first objective is to characterise developers’ approach to producing secure software by examining the artefacts produced and eliciting the developers’ awareness, attitudes, and assumptions about security. Do they think it’s someone else’s job? Do they care about security? Rashid suspects the project team will find a range of responses: some will care, some won’t; some will fail because the tools they are given make it hard to do secure programming. All of this will make it possible to determine how developers’ assumptions, behaviours, and awareness relate to the mistakes that appear in their software.


A schematic rendering of three degrees of secure software development: developers’ personal characteristics; the vulnerabilities in software associated with those characteristics; and the degrees of intervention to mitigate them.

Next, the project will investigate the factors that affect developers’ security behaviours. The researchers seek not only to understand developers’ security design strategies, but also to mitigate their biases and accommodate constraints such as pressure to meet market deadlines. Many apps have very short lifetimes; constraints like these need to be understood. Based on this work, the project hopes to develop and evaluate a range of cost-effective interventions for steering developers away from poor security design decisions, taking into account both the kinds of vulnerabilities to be avoided and the types of behaviour to be discouraged.

Earlier work by Tamara Lopez and Marian Petre (Open University), an ethnographic analysis of how developers detect and recover from errors, found three main stages: first, detecting that something has gone wrong; second, identifying what is wrong; third, undoing its effects. In this context, errors can be beneficial because they show that something has gone wrong.

With James Noble (Victoria University), Weir and Rashid have carried out complementary work to understand how developers learn about security and what encourages good security behaviour. This research found a pattern in the many interviews conducted with experienced people in industrial secure software development: challenges to what developers do encouraged them to engage with security. These challenges come from many directions: automated testing tools; pentesters and security experts; product managers; feedback from end users; the team’s own brainstorming sessions; and discussions with other development teams. All of these help developers think more about security and how to embed it in software.

The project hopes to build on this prior work as well as a small grant recently completed by Weir studying effective ways to intervene. Developers, Rashid concluded, do need our help. The project is eager to engage with others, receive critical feedback, and populate the space. Those interested can contact the project at contact@writingsecuresoftware.org.

The short discussion that followed raised the issue of sampling bias and how to engage people who are completely uninterested in security, an issue the project team has debated and which it recognises depends on sampling correctly. The design of the libraries developers use is often unhelpfully (and accidentally) complex; the project hopes to understand developers’ strategies for coping with this. Standard ways of programming may encourage or discourage good practice in this area; cryptographic libraries and APIs in particular are not easy to use. The practices of open source developers, who have relationships within and across teams, might lend themselves to specific kinds of software features, though this raises the question of how group identity influences specific behaviours. Finally, the possibility of regulation was raised, but this appears impractical when all sorts of people are developing software all over the world; future projections include the idea of programmable cities, an enterprise far too large to be handled by a few people in a few organisations. Giving developers a set of requirements that are too complicated to complete won’t help the way they work.


Wendy M. Grossman

Freelance writer specializing in computers, freedom, and privacy. For RISCS, I write blog posts and meeting and talk summaries.

9 Comments

Helen Sharp: Motivating Jenny to write secure software | RISCS · 14/08/2017 at 14:31

[…] as different development methods. There are close links between this project and the complementary Why Johnny Doesn’t Write Secure Software project, particularly in terms of the researchers involved, but the two were developed separately. […]

Sascha Fahl: The impact of code sources on cyber security | RISCS · 29/09/2017 at 09:03

[…] to open up a new area of research for RISCS that includes new projects intended to identify the problems developers have in trying to write secure code; motivating them to do better; and identifying helpful […]

Developer Centred Security Workshop | RISCS · 25/01/2018 at 16:58

[…] in November 2016. The day’s discussions led directly to the research call that funded Why Johnny Doesn’t Write Secure Software and Motivating Jenny to Write Secure Software, among […]

Doing IT Security | RISCS · 09/02/2018 at 13:20

[…] in November 2016. The day’s discussions led directly to the research call that funded Why Johnny Doesn’t Write Secure Software and Motivating Jenny to Write Secure Software, among […]

Watching how developers write secure code | RISCS · 14/02/2018 at 09:40

[…] in November 2016. The day’s discussions led directly to the research call that funded Why Johnny Doesn’t Write Secure Software and Motivating Jenny to Write Secure Software, among […]

Yasemin Acar: Conducting impactful security research | RISCS · 14/03/2018 at 14:22

[…] and developers are highly results-driven. One problem that was raised – a key tenet of the Johnny project – is that developers are increasingly not the homogeneous community they were; Android apps […]

Developer-centred security: lightning talks | RISCS · 11/04/2018 at 08:36

[…] van der Linden: Johnny Presenting for the Johnny project, Dirk van der Linden sought to add complexity by adding the dimension of human behaviour […]

Developer-centred security: Workshop introduction and summary | RISCS · 11/04/2018 at 08:36

[…] on from the 2016 developers workshop, RISCS has projects addressing some of these aspects. Why Johnny Doesn’t Write Secure Code looks at how and why security vulnerabilities arise from developers’ mistakes and asks how to […]

Angela Sasse: RISCS2 one year on – RISCS · 10/08/2018 at 13:55

[…] Why Johnny doesn’t write secure software […]
