
Developer Centred Security Workshop

Helen L, from the new National Cyber Security Centre (NCSC), laid out a series of questions for the day, to be discussed collaboratively at whiteboards placed around the room, with the aim of understanding the challenges software developers face that result in insecure products and services.

NCSC, created in October 2016, brings together several previous groups – CESG, CERT-UK, and CPNI – into a single organisation, based partly in Cheltenham and partly near London’s Victoria station. Helen works in the Socio-technical Security Group (StSG), which was set up in April 2015 to consolidate several teams: the Engineering Processes & Assurance team (which Helen leads); the Risk Management team (which John Y leads); and the People-Centred Security team (which Emma W leads). Joining these groups together, Helen said, enables them to tackle complex topics like cyber security in a better way.

Developer-centred security was a hot-potato idea for which no one had responsibility, even though many people saw it as an important issue. The formation of the Socio-technical Security Group puts them in a better position to work on this problem.

Crucially, the role of the human in cyber security systems is becoming recognised: technology by itself is not enough. Much research has focused on end users, but there are many other types of user – developers, sysadmins, and others – who are part of a large system and need to be thought about. The NCSC group brings together members from many different disciplines – social science, computer science, natural science – to tackle the complex problem of developer-centred security. For developers, secure code may not be what matters most: functionality, up-time, maintainability, and usability may all be seen as more important. Security sits at the bottom of that stack and is often traded off against those other needs.

Helen L highlighted some issues by asking: what if someone’s life depends on secure code? The obvious example is today’s pacemakers – implantable cardioverter-defibrillators (ICDs) – which are connected to the internet so they can pass data to the web portal the doctor uses to check up on each patient. At the recent O’Reilly security conference in Amsterdam, Helen heard from a Norwegian woman, Moe, who went on to research the code behind her implanted ICD after a bug in the software caused a brief collapse.

Among the things she found:

  • Her ICD had two wireless connections, one short-range to the home monitoring unit and the other from that unit to the web portal;
  • Very little security testing had been done on the implant, all of it theoretical;
  • A bug in the ICD software meant that settings on the device differed from the ones technicians and doctors could see on screen, which took a long time to figure out and had a direct impact on her well-being;
  • Her brief collapse while climbing the Underground stairs at Covent Garden was caused by a default setting error. The software was coded on the assumption that the device’s ultimate user would be 80 years old, with a much lower maximum heart rate than 35-year-old Moe’s. As a result, the device abruptly cut her heart rate from 160 beats per minute to 80 and an ambulance had to be called;
  • To date, there is no hard evidence (despite the plot in the TV series Homeland) that these devices can be hacked remotely, though short-range hacks are proven.

Many of today’s common problems have long been solved. SQL injection, for example, is a straightforward attack, known since 1998, with tools available to detect and fix it, yet it is still exploited and can still have high impact. Heartbleed was a buffer over-read. A bug in a code library used by telecommunications products puts mobile phones and networks at risk of takeover. Tesla has had to fix bugs in its radar systems. The Ashley Madison hack captured 11 million passwords. All are examples of coding errors.
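
As a minimal sketch of why this class of bug persists – not code from the workshop, and using a made-up table, rows, and payload purely for illustration – the Python snippet below shows how building a query by pasting in user input lets an attacker rewrite it, while a parameterised query treats the same input strictly as data:

```python
import sqlite3

# Toy in-memory database; table, rows, and payload are invented for this example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'letmein')")

user_input = "' OR '1'='1"  # classic injection payload supplied by an attacker

# Vulnerable: the input is pasted into the SQL text, so the payload
# rewrites the WHERE clause and the password check is bypassed.
unsafe_rows = conn.execute(
    "SELECT name FROM users WHERE password = '%s'" % user_input
).fetchall()

# Safer: a parameterised query keeps the input as data, not SQL.
safe_rows = conn.execute(
    "SELECT name FROM users WHERE password = ?", (user_input,)
).fetchall()

print(unsafe_rows)  # [('alice',)] -- every row matches; the check is defeated
print(safe_rows)    # []           -- the payload is just an incorrect password
```

Parameterised queries have been the standard fix for years, which is exactly the point being made here: the defence is well known and widely available, yet the mistake keeps recurring.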

To a security expert, the questions are obvious: why don’t developers use protective measures? Why can’t developers get it right? They should know better. How do we close the gap? Is software the weakest link? Are developers lazy or unmotivated? As Matthew Smith and Matthew Green have asked, is the developer the enemy?

In fact, there are many factors in these large systems that could go wrong. Helen called the problem a “minefield to navigate”, one that requires a wide range of skills and is becoming more complex over time. Software is just one piece that can be vulnerable to attack. The most secure software will be worked around if it’s not usable. Products need to be secure however they’re going to be used, but there is very little advice or guidance to help developers build that kind of usable security into their code.

The day’s discussion, therefore, is intended to map out the landscape of these problems and find evidence of what developers actually experience, in order to design appropriate interventions. Based on the day’s discussions, Helen’s group hopes to issue a research call early in 2017.

 

This talk/discussion was part of a RISCS/NCSC workshop on securing software development in November 2016. The day’s discussions led directly to the research call that funded Why Johnny Doesn’t Write Secure Software and Motivating Jenny to Write Secure Software, among others.

About Wendy M. Grossman

Freelance writer specializing in computers, freedom, and privacy. For RISCS, I write blog posts and meeting and talk summaries.
