Charles Weir

In his presentation at the June 2017 RISCS meeting, Charles Weir, a researcher at Lancaster University, outlined his work with Awais Rashid (Lancaster) and James Noble (Victoria University of Wellington) studying ways of intervening to provide software developers with security support. The project, based at Lancaster, is in its second year.

Weir’s research question: How can you intervene from outside to change what a developer does?

To tackle this, Weir interviewed a number of experts who had performed such (presumably successful) interventions. Eight strategies emerged from these interviews:

  • Almost all interviewees mentioned hosting incentivisation workshops early in their projects. The consensus was that the best approach is to scare developers rather than nudge them, while being sure to provide solutions.
  • Threat modelling.
  • Choice of components. For example, one penetration tester said that when testing Ruby on Rails systems guessing which components had been used made it easy to identify the most likely weaknesses.
  • Developer training.
  • Automated code analysis.
  • Pen testing. Only 30% to 40% of interviewees mentioned this, fewer than the researchers had expected.
  • Manual code review.
  • Continuous reminders through a drip feed of jokes, competitions, and nudges to keep security issues in the front of developers’ minds.

From there, Weir set out to determine which of these interventions was most worth pursuing. Strategically, the best ones to pick are those that cost the least and demand the least effort and discipline from the developers themselves.

Five of these options qualify. What surprised the researchers is that three of them are predominantly social changes to developers’ ways of working rather than technical changes to the code they produce: developing a threat model, motivational workshops, and continuous reminders. The other two low-cost but effective interventions are automated code analysis and informed choice of components. Of these, only automated code analysis is purely technical, and even that requires developers to act on the results it produces. The researchers therefore recommend focusing on these five. A fuller report is available.
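To illustrate why automated code analysis is cheap to adopt, here is a minimal sketch of the kind of pattern-based check such tools perform. The rules and messages are invented for illustration; real tools such as Bandit (for Python) or Brakeman (for Ruby on Rails) ship far richer, semantically aware rule sets.

```python
import re

# Hypothetical rules for illustration only; not from Weir's study.
RULES = [
    (re.compile(r"password\s*=\s*['\"]"), "hardcoded credential"),
    (re.compile(r"\beval\("), "use of eval on dynamic input"),
    (re.compile(r"verify\s*=\s*False"), "TLS verification disabled"),
]

def analyse(source: str) -> list:
    """Return (line_number, warning) pairs for each rule match."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = 'password = "hunter2"\nresult = eval(user_input)\n'
for lineno, message in analyse(sample):
    print(f"line {lineno}: {message}")
```

The point of the recommendation is that checks like these run automatically; the residual cost is the (non-trivial) discipline of reading and acting on the warnings.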

In response to questions, Weir noted that although it might be tempting to conclude that developers skip manual code review because it hasn’t proved useful, he has found it is more often the case that these reviews are personally difficult for developers, who therefore avoid them when they can.

Threat modelling is more effective when it focuses on assets rather than on attacker models, which developers find difficult to understand. Developers can instead focus on things attackers might want to steal, things that need protection, and stepping stones to further attacks, such as login credentials and reputation. All of these recommendations are well understood; what’s hard is getting development teams to adopt them.
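An asset-focused threat model can be as lightweight as a shared list grouped under the three headings above. The sketch below shows one possible shape for such a list; all asset names and mitigations are invented examples, not drawn from Weir's study.

```python
# Illustrative asset-centred threat model; every entry here is a
# made-up example, not from the research described above.
threat_model = {
    "things to steal": ["customer payment details", "order history"],
    "things to protect": ["service availability", "audit logs"],
    "stepping stones": ["login credentials", "session tokens",
                        "the service's reputation"],
}

# Recorded mitigations; a review pass flags any asset without one.
mitigations = {
    "customer payment details": "encrypt at rest; tokenise card numbers",
    "login credentials": "hash with a slow KDF; rate-limit login attempts",
}

for category, assets in threat_model.items():
    for asset in assets:
        status = mitigations.get(asset, "NO MITIGATION RECORDED")
        print(f"[{category}] {asset}: {status}")
```

A table like this is something a whole team can maintain without specialist security knowledge, which is what makes the intervention predominantly social rather than technical.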


Wendy M. Grossman

Freelance writer specializing in computers, freedom, and privacy. For RISCS, I write blog posts and summaries of meetings and talks.