The third session of the February 2018 developer-centred security (DCS) workshop was a reverse panel discussion in which panellists challenged the audience with provocations. The other sessions from this workshop are: introduction and summary, developer’s den, and lightning talks.

For the reverse panel session, the speakers were invited to pose questions and provocations about pain points and problems, and the practitioners and researchers in the audience were invited to comment and propose answers.

Ed Tucker is CIO at Data Protection Governance and the former head of cyber for Her Majesty’s Revenue and Customs. He asked how to prove a positive impact when the industry is full of technological idiots and always trying to prove a negative – that is, prove that the latest purchase of magic beans is the reason nothing happened.

One response was to use threat intelligence and information sharing: if you know what attacks are hitting organisations like yours and you’re not getting hit, that says something about the effectiveness of your security. The difficulty is linking correlation to causality. Another suggestion was to measure the density of vulnerabilities; if that density falls following a change in security posture or mentality, that might represent a positive result. Similarly, doing something different and comparing the outcome to others’ may provide an indicator, although there may be too many variables in environments you don’t control.

Martin Sadler is the former director of Hewlett-Packard’s Bristol research lab and a visiting professor at Royal Holloway. He asked: If no developer on a team can explain how Spectre works, would you feel comfortable flying, driving, or banking with systems controlled by software they had written? Sadler went on to explain that this is a challenge as the sophistication of attacks continues to rise.

In the past – as with the Rowhammer memory vulnerability discovered a few years ago – vendors could fix the problem, but Spectre changes the game. The future will bring an increasing number of side-channel attacks, and it will be ambiguous whether those can be dealt with at the developer or vendor level, each of which can dismiss them as somebody else’s problem. We already see this in the games industry with video cards, where everyone blames someone else when a card or driver update breaks something and ultimately the vendor decides the group of affected people is too small to care about. In those cases, the problem never gets fixed and the games company’s senior management has to abandon it.

This may become the attitude of most senior managers: they will say that security is too hard, it’s someone else’s problem, and we’ll wait for someone else to fix it. Is that what will happen? And how much of a model of how a computer works does a developer need? Materials scientists and physicists don’t base their work on a high-school understanding of their fields, but that’s the level of understanding many developers have. Pushing the problem to someone else is probably the only way in a world in which every Internet of Things device has a scare story attached to it – and, in the case of autonomous vehicles and medical devices, people’s lives are at stake.

One suggested option is better security architectures that isolate problems when they arise. There is also the psychological question of how many times something has to crash before people stop using it.

One commenter noted that we rarely join all the parts of a system together to see how they interact. Yet they didn’t think they’d ever seen a cyber breach that was due to just one thing being wrong; it is a combination of issues that leads to a breach. Yes, the proximate cause may be an unpatched system, but then: why was it unpatched? No one person can hold all the necessary information in their head because the space is too big now. Therefore, we need to learn how to communicate and collaborate better, and also how to take a holistic view.

Attacks on the supply chain have been rising, and Spectre is a supply chain issue. NCSC will shortly release new guidance on the supply chain.

Troy Hunt is a Microsoft Regional Director and founder of the website Have I Been Pwned. He asked how to articulate the value of security to stakeholders who have to make decisions about whether to invest in it. Someone starting a new business, for example, may feel that security is the right thing to do but can’t easily quantify the value they get in return for the money spent on it: maybe they won’t get hacked at some point in the future, or maybe the impact of an attack will be less severe than it otherwise would have been.

Citing the cost of fixing WannaCry as an example, a commenter noted that much of the time the cost of fixing is low compared to what consultants cost. However, protection is a public good; you are not only protecting yourself, and as a result we’re all dependent on others investing in security for the greater good, not just their own direct benefit. The cumulative effect of their investment helps to protect the UK software ecosystem, which does ultimately benefit them. However, it’s a difficult economic argument to articulate. Large companies can become certified Cyber Essentials bodies, which lets them help smaller companies in their supply chain to pass.

One commenter recalled a client who, when considering a move to MessageLabs, had effectively spent a year measuring how much they were spending on spam issues. Another suggested that the best approach is to identify stories at similar companies and look at the costs of the consequences, such as the board resigning, a dropping share price, and so on. Uncovering which losses or outcomes matter to a company is important, as is articulating cyber risks in their own language. In an organisation where disruption to their central service because of a cyber incident is unacceptable – for example, an airline’s planes being grounded – talking about loss of availability may carry greater weight. In these cases, connecting return on investment to these kinds of metrics may be the most effective argument. The RISCS research call “Supporting the Board: Managing Cyber Risk” will hopefully start us along the path to answering some of these questions.

Ollie Whitehouse is the chief technology officer at NCC Group. Whitehouse cited several examples of successful efforts to reduce the friction between security and development teams: the models in Netflix director of engineering Jason Chan’s talk on splitting the cost of security and compliance, Jearvon Dharrie’s 2017 presentation Automating Cloud Security and Incident Response (DevSecOps), and Armon Dadgar’s Policy as Code. The approaches these talks describe ensured that developers knew what tools would be installed for them for each process, and were given back a sense of ownership. In another example, Terraform embeds policy in code, which Whitehouse sees as the beginnings of recodifying security initiatives in a machine-readable, testable way. So, he asked: what else could we be doing? How could we take these ideas further?
In response, commenters noted that these initiatives are focused on delivering new things, which are the easiest to secure. Many of our problems derive from legacy infrastructure and the unanswered question of how to secure it. Another commenter said he has typically enjoyed the process of peer-reviewing policy, in part because it means people have to read the documentation.
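To make the “policy as code” idea more concrete, the sketch below shows one way security rules can be expressed as ordinary, machine-readable, testable code. It is only an illustration of the general pattern; the configuration fields, rule wording, and test names are hypothetical and are not drawn from Terraform or from any specific tool mentioned at the workshop.

```python
# Illustrative "policy as code" sketch: security rules written as plain,
# testable functions that run against a machine-readable deployment config.
# All field names and rules here are hypothetical examples.
from typing import Dict, List


def check_policy(config: Dict) -> List[str]:
    """Return a list of policy violations for a proposed deployment config."""
    violations = []
    if config.get("ingress_cidr") == "0.0.0.0/0":
        violations.append("ingress must not be open to the whole internet")
    if not config.get("encryption_at_rest", False):
        violations.append("storage must have encryption at rest enabled")
    return violations


# Because the policy is just code, it can be reviewed, version-controlled,
# and tested like any other part of the system.
def test_open_ingress_is_rejected():
    config = {"ingress_cidr": "0.0.0.0/0", "encryption_at_rest": True}
    assert "ingress must not be open to the whole internet" in check_policy(config)


def test_compliant_config_passes():
    config = {"ingress_cidr": "10.0.0.0/8", "encryption_at_rest": True}
    assert check_policy(config) == []
```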

Steve Marshall is Head of Technical Architecture at the Ministry of Justice. His main question: how do we get the developers we’re talking about to know and care that we exist? A lot of the issues he’s seeing are very basic – SQL injection, for example – and many developers don’t care that they need to care about them. What should we be doing that we’re not?
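To illustrate just how basic an issue like SQL injection is – and how little it takes to fix – here is a minimal sketch in Python using the standard sqlite3 module. The table and values are hypothetical; the point is the contrast between concatenating user input into a query and passing it as a parameter.

```python
import sqlite3

# Hypothetical example data; the point is the query construction, not the schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [("alice", 0), ("bob", 1)])

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is concatenated into the SQL string, so the attacker's
# quote characters change the meaning of the query and it matches every row.
rows = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query returned", len(rows), "row(s)")   # 2

# Safer: a parameterised query treats the input purely as data.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterised query returned", len(rows), "row(s)")  # 0
```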

The first response noted that a lot of the day’s discussion had been about incentivisation and had included several relevant ideas: games, using teams to share knowledge, and developing an understanding of who gets hurt. Marshall is not convinced we can even get in the door in a lot of organisations. The RISCS subcommunity on talking to the board is one way, but it is a top-down approach. In a bottom-up approach, what are the routes of effective communication to raise awareness about security amongst developers?

Organisational culture is a crucial element; highlighting, recognising, and promoting those who solve serious security issues is a possible way of creating engagement. However, security often ends up being about just putting out fires, and trying to do everything under that kind of pressure leads to selling security in the wrong way. Even a company like Yahoo!, which was very thorough about security as recently as 2007, changed when the organisation stopped incentivising it. How can we effectively reward those who care about security? How can everyone be brought in?

A complication is that the broader industry development community is organised around languages and frameworks.
In addition, there are many open questions. Someone sitting in a company may be unable to learn from a top-ten list of vulnerabilities because they can’t fix them at their end. How do we hire security people at scale? Or developers with a security education at scale? What is the minimum education they should have? In one university a cyber security course started with cryptography – but there was no connection between that and the code the students wrote, so what they learned couldn’t be transferred. Distilling theoretical knowledge so it can be taught to others to use on the job is a problem.

Daryl is the deputy director of research and innovation for GCHQ. He asked what our motivations are and how our investment matches those motivations. We are in danger of trying to push a rock uphill, because there are lots of things developers care about: they like to produce stuff that works, that is functional and efficient, and in a lot of cases security is at best a ride-along. Either we must make some trade-offs so that security stands alongside those priorities, or we must sell security along with those other desirable non-functionals in order to optimise the whole – for example, by developing a tool that enhances productivity and efficiency as well as improving security. How do we do that?

Commenters suggested that framing security in terms of business efficiency is essential; without it, we will always be pushing the rock. Another stressed organisational culture: if project managers and product owners aren’t trained to care about security, there’s only so much a team of knowledgeable and motivated developers can do. Management and leadership need to make clearer where they are willing to take risks; product managers and senior management make decisions about risks every day, even though we talk as if these problems were unique to security. The gap lies in translating security knowledge into risks that make sense at that higher level.

However, not all developers are solely focused on getting things out the door as fast as possible. The security industry is the last area that has not become user-centric or customer-centric. Some policies are simply not fit for purpose and don’t match how the business operates, and there is a massive disjunct between theory and practice. The worst question you can ask a security person is: why? Very few can explain why their policy is right, and what you do may be affected by problems on the periphery. We need self-sufficient teams that can think about these things and understand the practical risk implications.

Wrap-up
Helen L felt the following main points had emerged:

  • Everything needs to be contextual and involve users
  • Translation is important; security uses a lexicon not many understand
  • As the field gets bigger, the number of people who are learning on the job and need to be reached becomes unmanageable
  • There is no single point of responsibility and everything is someone else’s problem
  • How we come together to solve security problems is powerful
  • There are no ready answers
  • The reverse panel was interesting, but the takeaway was that it was difficult to answer the questions and we have quite a way to go still.

Wendy M. Grossman

Freelance writer specializing in computers, freedom, and privacy. For RISCS, I write blog posts and meeting and talk summaries.
