A call for short research projects: Understanding, preventing and responding to cyber crime

The Home Office, in collaboration with the Research Institute in Science of Cyber Security, is pleased to release a call for short research projects on understanding, preventing and responding to cyber crime.


Closing Date: 26th June 2018

An invitation to apply for grant funding from OSCT Research and Analysis, Home Office

Summary of requirement
The Research Institute in Science of Cyber Security (RISCS) is expanding its interdisciplinary research community to develop further collaboration between the social sciences and the cyber security professions. This is being complemented by the development of a new cyber crime-focused research programme, commissioned by the Home Office and funded through the National Cyber Security Programme. The research programme will comprise both longer-term, multi-year research projects and shorter-term research. This grant call is for short research projects commencing in the 2018/19 financial year and completing by the end of September 2019 at the latest.

To inform the grant calls, consultation activities were held with policymakers, law enforcement, academics and other stakeholders to discuss evidence gaps in the cyber crime field. This identified a range of key areas that need to be addressed to inform policy and operational priorities. The Home Office is now inviting proposals for short projects addressing these evidence gaps, for funding in the 2018/19 FY and the first half of the 2019/20 FY only. The themes include, but are not limited to:

  • Costs and consequences of cyber crime.
  • Cyber “Protect” – improving cyber security behaviours amongst the public and businesses.
  • Cyber “Prepare” – understanding more about victims of cyber crime; resilience; victim support and advice and how to improve reporting of cyber crimes.
  • Cyber “Prevent” – understanding offenders, pathways and offender interventions.
  • Cyber “Pursue” – disruption techniques and offender business models.
  • Future technological developments and policing of cyber crime.
  • International dimensions for cyber crime.

Projects may start as soon as possible in 2018/19 and can deliver any time up to the end of September 2019, with a total budget of up to £400k available for multiple projects. This comprises up to £250k of funding available during the 2018/19 FY and up to £150k during the 2019/20 FY. The number of projects funded depends upon the proposals received. We welcome proposals with collaborative, multi-disciplinary approaches, employing any appropriate and justified methodological techniques. The social sciences should make a major contribution to the project, and the role they play should be clear in the proposal.

Applicants are required to provide in their proposals:

  • An outline of the proposed approach and scope of the project.
  • The research outputs and how they will deliver impact for policy-makers or operational stakeholders.
  • CVs for individuals who will be involved in the project, including any relevant background knowledge and expertise regarding the proposed area of work.

Full details can be found at Short Research Projects grant call May 2018 (opens PDF)

Developer-centred security: Workshop introduction and summary

The February 2018 workshop on developer-centred security (DCS) had four parts. After an introduction and background summary by NCSC’s engineering processes and assurance lead Helen L (below), a Developer’s Den set up a feedback loop in which researchers and practitioners commented on a series of short pitches describing new tools and techniques. Next, a reverse panel session asked senior representatives from government and industry to pose questions and articulate challenges for the RISCS DCS community to consider. Finally, in a series of lightning talks, researchers from the £1.5 million DCS portfolio that sits under RISCS presented updates on their work in progress.

The RISCS secure development subcommunity was created in 2017 to research ways of supporting developers in writing more secure code. To date, developers have been seen as a ‘weak link’, and imposing more and more security processes on them has not been effective. This research seeks to understand more deeply the behaviours and motivations of developers, how these affect the security aspects of the code they write, and therefore how developers can be better supported. Once the subcommunity has established a better understanding of the landscape, its goal is to use the results of this first phase of research to improve the products on offer in the marketplace that support developers, and to create a positive feedback loop informing further research and products in development. NCSC has already published secure development guidance with this work in mind. With advice from the developer community, NCSC intends to issue further guidance for those involved in supporting developers.

Introduction
For the workshop held on February 8, 2018, Helen L was seeking advice, guidance, and input from those supporting developers working in various sectors: startups, government, industry. Helen L described secure code development as a “leaky pipe”:

Helen L’s leaky pipe of secure software development

As the image shows, there are many opportunities for stopping those leaks:

  • education to support developers to learn about security;
  • organisational culture, which is important to how developers feel about security and whether they are incentivised to implement it;
  • motivation;
  • behaviour and habits;
  • support for developers.

Following on from the 2016 developers workshop, RISCS has projects addressing some of these aspects. Why Johnny Doesn’t Write Secure Code looks at how and why security vulnerabilities arise from developers’ mistakes and asks how to mitigate them. Motivating Jenny to Write Secure Software studies what motivates software developers to do secure coding and how to improve their motivation, tools, and culture. For a small grant project in 2017, Charles Weir investigated intervention strategies. All of these are intended to take a new, more productive approach by identifying potential interventions and understanding the daily pressures on developers and how they understand security, given that they are not experts in this area.

Helen L believes it’s important to take support to the developers rather than waiting for them to find published advice and guidance. The goal here, therefore, is to facilitate a feedback loop between research and industry, so the two can move forward collaboratively.

Developer-centred security: Lightning Talks

In the final session of the February 2018 workshop, researchers from ongoing developer-centred security (DCS) projects presented brief introductions to their work, followed by a general discussion. The other sessions from this workshop are: introduction and summary, developer’s den, and reverse panel discussion.

Tamara Lopez: Motivating Jenny

Tamara Lopez (Open University) introduced Motivating Jenny. The project asks two research questions:

  • What motivates developers to adopt security practices and technologies?
  • How do we develop and sustain a culture of security?

The project, which has engaged with practitioners from the beginning, has begun by characterising developers: what motivates them and saps their energy, their values and needs, their talents, and the stage of their careers. One of the group’s first ethnographic studies looks at how developers talk to each other about security in posts on Stack Overflow. From the archived top 20 questions of all time, three dimensions of talk have emerged: security advice, values and attitudes, and community involvement. Developer comments, such as that it’s “inconvenient” to use a particular function, reflect what we hear at these meetings. When a respondent tells them to just do it, are they prioritising one non-functional requirement over another? They may be saying: write good (secure) code, don’t worry about the user experience. The project is seeing some of the trade-offs between users and developers that RISCS talks about.

The Stack Overflow work will give them a sense of what things to listen for in the site visits they are planning next.

Dirk van der Linden: Johnny
Presenting for the Johnny project, Dirk van der Linden sought to add the dimension of human behaviour rather than just studying interventions, because interventions alter behaviour in a feedback loop. In addition, the project has chosen not to focus solely on professional developers (as the Jenny project does) but to recognise that software development is being done by the masses – everyone and no one. Before we can ask what motivates or influences these developers, we need to know who they are. The project is accordingly trying to understand their diversity. What groups cluster together? How do the people fit together? We can’t assume it’s the language they speak, their educational level, or what they’re working on. Understanding this complexity is essential before we can ask what intervention works on which people. A big part of the project is trying to understand that diversity, focusing on the range from individuals to small-to-medium sized organisations.

Some discussion followed about developers’ motivations. Large organisations have management structures designed to implement interventions in ways that are known to work. The smaller the organisation, the harder it becomes to figure out how to affect someone’s behaviour. Writing quality code can be a motivator, but an indirect one, as in the open source community where people use their code as a portfolio they can build up to get a job. The project goes beyond motivation to consider how people write code, structure processes, and use tools to avoid mistakes.

Sascha Fahl: Simplicity Trumps Security
Sascha Fahl’s latest project, with Yasemin Acar and Marten Oltrogge, looks at the impact of “citizen developers” on software security. A relatively new phenomenon, citizen developers are users, rather than professional developers, who create new business applications for use by others, using development and runtime environments sanctioned by corporate IT. Often they use code generators, of which there are many; these are easy to use, and laypeople can create apps with a mouse and drag-and-drop. However, the development process is a black box, and it’s hard to know what’s happening inside.
Fahl set out to ask two research questions:

  • Are generated apps widely used?
  • What is their impact on code security?

There are two types of generators: those you download and run on your own machine, and those you use online. Fahl began by building analysis tools to identify these generated apps. Running these tools across 2.2 million Android apps, he found 250,000 generated apps with 1.1 billion collective installs. All of these came from a list of approximately 25 online generators. Analysis of the generated apps found they did duplicate known issues, but also added new problems, such as using a single signing key to sign up to 30,000 apps. Put simply, he was seeing automated lack of security.
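The write-up doesn’t include Fahl’s tooling, but the kind of analysis involved can be sketched. The following minimal Python fragment – an illustration assumed by this summary, not the project’s actual code – groups APKs by the fingerprint of their v1 signing certificate, the sort of check that would surface one key signing tens of thousands of apps. It relies on the standard library plus the third-party cryptography package; the apks directory is hypothetical.

```python
import zipfile
from collections import defaultdict
from pathlib import Path

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.serialization import pkcs7

def signing_cert_fingerprint(apk_path):
    """Return the SHA-256 fingerprint of an APK's v1 signing certificate, or None."""
    with zipfile.ZipFile(apk_path) as apk:  # an APK is an ordinary ZIP archive
        for name in apk.namelist():
            # v1 (JAR) signing stores a PKCS#7 block under META-INF/
            if name.startswith("META-INF/") and name.endswith((".RSA", ".DSA", ".EC")):
                certs = pkcs7.load_der_pkcs7_certificates(apk.read(name))
                if certs:
                    return certs[0].fingerprint(hashes.SHA256()).hex()
    return None

def cluster_by_signing_key(apk_dir):
    """Group APK filenames by the certificate that signed them."""
    clusters = defaultdict(list)
    for path in Path(apk_dir).glob("*.apk"):
        fp = signing_cert_fingerprint(path)
        if fp:
            clusters[fp].append(path.name)
    return clusters

if __name__ == "__main__":
    for fp, apps in cluster_by_signing_key("apks").items():
        if len(apps) > 1:  # one key signing many apps suggests a shared generator key
            print(f"{fp[:16]}... signs {len(apps)} apps")
```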
The result of this work was a paper that will be presented at the IEEE Symposium on Security and Privacy in May 2018.
Respondents commented that this was an important piece of work that has highlighted a security issue at scale. Future work will look at who is using these generators, information that will be needed to answer the question of how to make them more secure but still easy to use. The researchers are also planning coordinated disclosure.

Charles Weir: Majid Research Project – Helping Developers Improve their Security
Charles Weir (Lancaster) presented ongoing work with a team including Ingolf Becker and Angela Sasse on the Majid Research Project. Prior work had identified a range of inexpensive interventions. Now, expert consultants have set out to deploy these in teams in three widely different companies and record the process. Using dual coding of the transcribed sessions, the researchers have started to identify what produces the best improvements. They are in the early stages of analysis.

One small result lies in what developers have said they find encouraging. First is the idea that security features can be a sales pitch, which happens more often than we generally think. A second is gamification, for example, providing lists of red crosses that turn to green ticks. The results also suggest that better support helps developers feel their efforts are more worthwhile.
Questions focused on the role of community in gamification. For example, a green check at Stack Overflow means other users have upvoted a posting or solution. Another question was whether the researchers had considered that any positive effects they find might be short term and die off over time. The project had only three months, which isn’t long enough to test that; the tail-off effect takes more like a year. Addressing that can be an aim for further work.

Manuel Maarek: Impact of gamification on developer-centred security
Manuel Maarek (Heriot-Watt), Sandy Louchart (Glasgow School of Art), Léon McGregor (HW) and Ross McMenemy (GSoA) are studying the impact of gamification on developers using coding-based games, competitions, interactions for education, and secure coding games. The main research question: does gamification have a greater impact on security tasks than on non-security tasks? The researchers’ hypothesis is that it does, based on the impression that adversarial discussions come easily in security, a trigger that can be activated by putting the task inside a game.

Each participant is given six programming tasks; three have a security focus and three do not. Participants work in two settings, performing these tasks either as part of a sequence of online programming exercises or as part of an online game with programming exercises. As a control, the security tasks were chosen to partially replicate Sascha Fahl’s and Yasemin Acar’s SOUPS 2017 paper, and the effect of gamification was studied by comparing security and non-security tasks.

Laura Kocksch: Big data security, a clash of philosophies
Laura Kocksch, a social scientist who spoke at the 2016 RISCS developers workshop about her ethnographic study of training a software development team inside an organisation, discussed her work on big data security. For it, she spent eight months in companies looking at data innovation versus security.
Kocksch observes a clash of philosophies. She does not see security as just a technical definition that can be inculcated into developers; many developers do already think about security and have their own picture of what it is.

Her previous research showed that there may be good organisational reasons for bad security. Extra work solving security problems rarely gets developers any credit, and they often struggle to keep up with this extra workload. The developers she studied liked their training, which stuck with them for six or seven months after it ended, but they still needed organisational support. If, for example, security doesn’t count as a feature request, it does not form part of their accountable work practices. Longstanding industry sectors such as energy and insurance are starting to take on big data, and are accordingly making changes in their data infrastructure, which might offer a good opportunity for security by design.
In the companies she studied, big data is seen as a big opportunity, but security is believed to always hold it back. She found a complete clash of cultures, which raised questions of how to translate between them to enable cross-disciplinary understanding. Social science has long-standing ideas about how this might work.

Kocksch has a number of questions. Where and when does security come up? When in the process is it important to the developers to talk about security? What does it mean to them? These things need to be defined. Certain concepts that focus on establishing symmetrical discussions might apply, such as boundary objects and trading zones.

A questioner asked if the concept of “security as a service” would be helpful, so that the security team would take the humble approach of offering help to developers rather than saying no to things. Kocksch agreed that the people she talked to call the security department “the department of No”, often for good reasons. Kocksch is trying to open up the network of connections that are there, rather than trying to define big data. A key issue is whether the two different groups want to work together; Kocksch has found that both groups feel there is a new infrastructure opportunity, but the problem remains that they have completely different goals.

General lightning round discussion
A number of points were raised in discussion of the lightning talks. Among them:

  • How to nudge people to understand security, including developers and board members, who react to disasters like anyone else. Making people too fearful can be counterproductive in terms of motivating them. Boards are always being sold to, but they are interested in security, and a better approach may be to try to take that interest and build on it, nudging them to where they need to be. However, if the first approach is to say they’ve got it wrong and then bore them on the subject, they will turn off. Still, there is an exciting news story about security almost every day, and there are positive things that can be used as well as fear and shock. Can we use these stories to better effect?
  • The difference between drivers for security and enablers for that security to happen.
  • One thing missing from the workshop was mental models, which are often the reason people make mistakes. This led to the question of what most engineers understand by “mental model”.
  • As a corollary, research generally focuses on tools and actions, but can’t pinpoint where people’s thinking goes wrong.
  • Outsourcing adds numerous difficulties. Who is responsible for the security of the outsourced IT? Organisations may have all the same discussions, but with outsourcing they are no longer talking about in-house developers but remote people who may be behind a salesperson.
  • Related to this are concerns about getting quality into the supply chain. Is this a separate issue for security, or should security be incorporated into the contract guidelines when outsourcing? Ultimately, organisations have to trust the people working for them.
  • How do we get people to take an interest in the boring and trivial parts of security? The people who invite consultants are often more interested in the sexy and advanced problems, such as stolen data or the latest cryptography algorithm, even though there may be 50 other more important steps to take first.


Developer-centred security: Reverse Panel

The third session of the February 2018 developer-centred security (DCS) workshop was a reverse panel discussion in which panellists challenged the audience with provocations. The other sessions from this workshop are: introduction and summary, developer’s den, and lightning talks.

For the reverse panel session speakers were invited to pose questions and provocations about pain points and problems to the practitioners and researchers, who were invited to comment and propose answers.

Ed Tucker is CIO at Data Protection Governance and the former head of cyber for Her Majesty’s Revenue and Customs. He asked how to prove a positive impact when the industry is full of technological idiots and always trying to prove a negative – that is, prove that the latest purchase of magic beans is the reason nothing happened.

One response was to use threat intelligence and information sharing: if you know what attacks are hitting organisations like yours and you’re not getting hit, that says something about the effectiveness of your security. The difficulty is linking correlation to causality. Another suggestion was to measure the density of vulnerabilities; if that density is reduced following a change in security posture, that might represent a positive result. Similarly, doing something different and comparing the outcome to others’ may provide an indicator, although there may be too many variables in environments you don’t control.
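To make the vulnerability-density suggestion concrete – a minimal sketch assumed by this summary, not a method proposed in the session – the metric is typically vulnerabilities per thousand lines of code (KLOC), tracked across releases so a posture change can be compared against a baseline. The release figures below are made up for illustration.

```python
def vulnerability_density(vuln_count: int, lines_of_code: int) -> float:
    """Vulnerabilities per thousand lines of code (KLOC)."""
    return vuln_count / (lines_of_code / 1000)

releases = {
    # release: (vulnerabilities found, total lines of code) -- illustrative figures
    "v1.0 (before intervention)": (42, 120_000),
    "v2.0 (after intervention)": (18, 150_000),
}

for release, (vulns, loc) in releases.items():
    print(f"{release}: {vulnerability_density(vulns, loc):.2f} vulns/KLOC")
```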

Martin Sadler is the former director of Hewlett-Packard’s Bristol research lab and a visiting professor at Royal Holloway. He asked: If no developer on a team can explain how Spectre works, would you feel comfortable flying, driving, or banking with systems controlled by software they had written? Sadler went on to explain that this is a challenge as the sophistication of attacks continues to rise.

In the past – as with the Rowhammer memory vulnerability discovered a few years ago – vendors could fix the problem, but Spectre changes the game. The future will bring an increasing number of side-channel attacks, and it will be ambiguous whether these can be dealt with at the developer or vendor level, both of whom can dismiss them as somebody else’s problem. We already see this in the games industry with video cards, where everyone blames someone else when a card or driver update breaks something and ultimately the vendor decides the group of affected people is too small to care about. In those cases, the problem never gets fixed and the games company’s senior management has to abandon it. This may become the attitude of most senior managers: they will say that security is too hard, it’s someone else’s problem, and we’ll wait for someone else to fix it. Is that what will happen? And how much of a model of how a computer works does a developer need? Materials scientists and physicists don’t base their work on a high school understanding of their fields, but that’s the level of understanding many developers have. Pushing the problem to someone else is probably the only way in a world in which every Internet of Things device has a scare story attached to it and, in the case of autonomous vehicles and medical devices, people’s lives are at stake.

One suggested option is better security architectures that isolate problems when they arise. There is also the psychological question of how many times something has to crash before people will stop using it.
One commenter noted that we rarely join all the parts of a system together to see how they interact. Yet they didn’t think they’d ever seen a cyber breach that was due to just one thing being wrong; a compendium of issues leads to a breach. Yes, the proximate cause may be an unpatched system, but then: why was it unpatched? No one person can hold all the necessary information in their head because the space is too big now. Therefore, we need to learn how to communicate and collaborate better, and also how to take a holistic view.
Attacks on the supply chain have been rising, and Spectre is a supply chain issue. NCSC will shortly release new guidance on the supply chain.

Troy Hunt is a Microsoft Regional Director and founder of the website Have I Been Pwned. He asked how to articulate the value of security to stakeholders who have to make decisions about whether to invest in it. Someone starting a new business, for example, may say that security feels like the right thing to do, but can’t easily quantify the value they get in return for the money spent on it. Maybe they won’t get hacked at some point in the future, or the impact of an attack won’t be as bad.

Citing the cost of fixing WannaCry as an example, a commenter noted that much of the time the costs of fixing are low compared to how much consultants cost. However, protection is a public good; you are not only protecting yourself, and as a result we’re all dependent on others investing in security for the greater good, not just their own direct benefit. The cumulative effect of their investment helps to protect the UK software ecosystem, which does ultimately benefit them. However, it’s a difficult economic argument to articulate. Large companies can get certified as Cyber Essentials bodies, which lets them help smaller companies in their supply chain to pass.

One commenter was told by a client that, when considering moving to MessageLabs, they had effectively spent a year measuring how much they spent on spam issues. Another suggested that the best way is to identify stories in similar companies and look at the costs of the consequences, such as the board resigning, a falling share price, and so on. Uncovering which losses or outcomes matter to a company is important, as is articulating cyber risks in the company’s own language. In an organisation where disruption to its central service because of a cyber incident is unacceptable – for example, an airline’s planes being grounded – talking about loss of availability may carry greater weight. In these cases, connecting return on investment to these kinds of metrics may be the most effective argument. The RISCS research call “Supporting the Board: Managing Cyber Risk” will hopefully start us along the path to answering some of these questions.

Ollie Whitehouse is the chief technology officer at NCC Group. Whitehouse cited several examples of successful efforts to reduce the friction between security and development teams: the models in Netflix director of engineering Jason Chan’s talk on splitting the cost of security and compliance, Jearvon Dharrie’s 2017 presentation Automating Cloud Security and Incident Response (DevSecOps), and Armon Dadgar’s Policy as Code. The approaches these talks describe ensured that developers knew what tools would be installed for them for each process, and were given back a sense of ownership. In another example, Terraform embeds policy in code, which Whitehouse sees as the beginnings of recodifying security initiatives in a machine-readable, testable way. So, he asked: what else could we be doing? How could we take these ideas further?
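To make “policy as code” concrete, here is a minimal, illustrative Python sketch – this summary’s assumption, not Sentinel, Terraform, or any product named above: security policy expressed as executable, testable checks that run against machine-readable resource definitions before deployment, so a CI pipeline can fail on any violation. The resource fields and rules are hypothetical.

```python
from typing import Callable, Dict, List

Policy = Callable[[Dict], List[str]]

def no_public_storage(resource: Dict) -> List[str]:
    """Storage buckets must not be world-readable."""
    if resource.get("type") == "storage_bucket" and resource.get("acl") == "public-read":
        return [f"{resource['name']}: bucket must not be public"]
    return []

def encryption_required(resource: Dict) -> List[str]:
    """Databases must enable encryption at rest."""
    if resource.get("type") == "database" and not resource.get("encrypted", False):
        return [f"{resource['name']}: encryption at rest is required"]
    return []

POLICIES: List[Policy] = [no_public_storage, encryption_required]

def evaluate(resources: List[Dict]) -> List[str]:
    """Run every policy against every resource and collect violations."""
    return [v for r in resources for p in POLICIES for v in p(r)]

if __name__ == "__main__":
    plan = [  # a machine-readable deployment plan (hypothetical fields)
        {"type": "storage_bucket", "name": "logs", "acl": "public-read"},
        {"type": "database", "name": "users", "encrypted": True},
    ]
    violations = evaluate(plan)
    for v in violations:
        print("POLICY VIOLATION:", v)
    raise SystemExit(1 if violations else 0)  # fail the CI pipeline on any hit
```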
In response, commenters noted that these initiatives are focused on delivering new things, which are the easiest to secure. Many of our problems derive from legacy infrastructure and the unanswered question of how to secure it. Another commenter said he has typically enjoyed the process of peer review of policy, in part because it means people have to read the documentation.

Steve Marshall is Head of Technical Architecture at the Ministry of Justice. His main question: how do we get the developers we’re talking about to know and care that we exist? A lot of the issues he’s seeing are very basic – SQL injection, for example, sketched below – and many developers don’t know that they need to care about them. What should we be doing that we’re not?
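As a minimal illustration of how basic the issue is – this summary’s sketch, not Marshall’s own example – the following Python fragment contrasts a string-built SQLite query, which an attacker-controlled value can subvert, with the parameterised form that keeps input as data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

user_input = "nobody' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced into the SQL text, so the OR clause
# becomes part of the query and matches every row.
rows = conn.execute(
    f"SELECT * FROM users WHERE name = '{user_input}'"
).fetchall()
print("string-built query returned", len(rows), "rows")   # 2 rows

# Safe: a placeholder keeps the input as data, never as SQL syntax.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterised query returned", len(rows), "rows")  # 0 rows
```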

The first response noted that a lot of the day’s discussion had been about incentivisation, and had included several relevant ideas: games, using teams to share knowledge, and an understanding of who gets hurt. Marshall is not convinced we can even get in the door in a lot of organisations. The RISCS subcommunity on talking to the board is one way, but a top-down one. In a bottom-up approach, what are the routes of effective communication to raise awareness about security amongst developers?

Organisational culture is a crucial element; highlighting, recognising, and promoting those who solve serious security issues is a possible way of creating engagement. However, security often ends up being about just putting out fires, and trying to do everything under this kind of pressure leads to selling security in the wrong way. Even a company like Yahoo!, which was very thorough about security as recently as 2007, changed when the organisation stopped incentivising it. How can we effectively reward those who care about security? How can everyone be brought in?

A complication is that the broader industry development community is organised around languages and frameworks.
In addition, there are many open questions. Someone sitting in a company can’t learn from the top-ten list of vulnerabilities because they can’t fix them at their end point. How do we hire security people at scale? Or developers with a security education at scale? What is the minimum education to have? In one university a cyber security course started with cryptography – but there was no connection between that and the code the students wrote, and what they learned couldn’t be transferred. Distilling theoretical knowledge so it can be taught to others to use on the job is a problem.

Daryl is the deputy director of research and innovation for GCHQ. He asked what our motivations are and how our investment matches those motivations. We are in danger of trying to push a rock uphill, because there are lots of things developers care about: they like to produce stuff that works, that is functional and efficient, and in a lot of cases security is at best a ride-along. Either we must make some trades so security stands up alongside those priorities, or we must sell security along with those other desirable non-functionals in order to optimise the whole – for example, by developing a tool that enhances productivity and efficiency as well as improving security. How do we do that?

Commenters suggested that the mentality of business efficiency is essential; without it, we will always be pushing the rock. Another stressed organisational culture: if project managers and product owners aren’t trained to care about security there’s only so much a team of knowledgeable and motivated developers can do. Management and leadership need to make clearer where they are willing to take risks; product managers and senior management make decisions about risks every day even though we talk as if these problems were unique to security. The gap lies in translating security knowledge into risks that make sense at that higher level. However, not all developers are solely focused on getting things out the door as fast as possible. The security industry is the last area that has not become user-centric or customer-centric. Some policies are just not fit for purpose and don’t match how the business operates, and there is a massive disjunct between theory and practice. The worst question you can ask a security person is: why? Very few can explain why their policy is right, plus what you do may be affected by problems on the periphery. We need self-sufficient teams that can think about these things and understand the practical risk implications.

Wrap-up
Helen L felt the following main points had emerged:

  • Everything needs to be contextual and involve users
  • Translation is important; security uses a lexicon not many understand
  • As the field gets bigger, the number of people to communicate with, many of them learning on the job, becomes unmanageable
  • There is no single point of responsibility and everything is someone else’s problem
  • How we come together to solve security problems is powerful
  • There are no ready answers
  • The reverse panel was interesting, but the takeaway was that it was difficult to answer the questions and we have quite a way to go still.


Developer-centred security: Developers Den

The second session of the February 2018 workshop was a Developer’s Den, in which three people developing security tools or services to support developers presented their efforts to date in search of constructive criticism. The other sessions from this workshop are: introduction and summary, reverse panel discussion, and lightning talks.

This part of the day was designed to stimulate a feedback loop and to begin pulling together a toolbox of developer-support techniques and products that have been reviewed by the DCS research community. Three tools and services were presented.

The panellists included researchers Helen Sharp (Open University), Awais Rashid (Bristol), Yasemin Acar (Leibniz Uni Hannover), Sascha Fahl (Leibniz Uni Hannover), and Manuel Maarek (Heriot-Watt), and developers DXW founder Harry Metcalfe and Michael Brunton-Spall.

Secure Code Warrior
Secure Code Warrior co-founder John Fitzgerald has spent 14 years working in cyber security, including a significant period of time with SANS. SCW focuses exclusively on two areas: network security and people. Around 2010, he saw people becoming the key attack vector. In 2012, after extending the SANS platform into different verticals, one of which was developers, he began asking why we keep doing the same thing over and over again, and whether we’re solving the right problem if, as ten years of reports have consistently found, 35% of breaches are caused by software vulnerabilities.

Quality assurance provided a role model: 20 years ago, when it became unaffordable to fix functional problems after release, people set up quality assurance processes, and now developers write unit tests before they tackle the program code, so they know what to test against. Fitzgerald asked: why don’t we do this in security? That became the genesis of SCW.

Fitzgerald wants to make developers the first line of security. It is, however, challenging. In Meet the Modern Learner, Deloitte found that a typical employee is allowed to spend 1% of their time on training and development – that is, 24 minutes in a 40-hour week. Security is seen as a problem for the security team, and anything invested in developers is seen as something that has to be repaid in productivity. The goal with Secure Code Warrior was to change the whole game by turning developers into security superheroes who avoid, rather than discover, vulnerabilities. Fitzgerald reasons that this approach also ought to translate into savings, as the cost of fixing security bugs rises the later they are found in the development process.

SCW is a hands-on learning platform designed to be competitive, engaging, and language-specific. It includes a number of tools: tournaments, training, learning, a coaching plugin, metrics, and assessments. The coaching plug-in, known as Sensei, sits in the integrated development environment and acts like a spell-checker for security while developers are coding, checked against the organisation’s own guidelines and the top 20 list of vulnerabilities. With security costs escalating, Fitzgerald believes it’s important to stop writing insecure code, rather than adding cost by finding the vulnerabilities that result from that underlying problem.
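As a rough illustration of the “spell-checker for security” idea – this summary’s sketch, not SCW’s actual Sensei implementation – a minimal checker can walk a program’s syntax tree and flag calls an organisation’s guidelines prohibit. The two rules below are hypothetical examples:

```python
import ast

# Hypothetical rules an organisation's guidelines might define.
RULES = {
    "eval": "avoid eval(): it executes arbitrary code",
    "md5": "MD5 is not suitable for security purposes",
}

def check(source: str):
    """Yield (line, message) for every call that matches a rule."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # handle both bare names (eval) and attributes (hashlib.md5)
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
            if name in RULES:
                yield node.lineno, RULES[name]

if __name__ == "__main__":
    sample = (
        "import hashlib\n"
        "digest = hashlib.md5(data).hexdigest()\n"
        "result = eval(user_input)\n"
    )
    for line, message in check(sample):
        print(f"line {line}: {message}")
```

A real IDE plugin would run checks like these continuously as the developer types, rather than as a batch scan.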

The panel raised a number of issues: the user interface; the importance of incorporating community; time pressures; how to integrate this tool into cultures that share knowledge and support informally, rather than going through formalised training; the auto-detection feature’s error rates and the dangers of over-relying on its accuracy; and developers’ general dislike of training, no matter how “engaging”. In agile development the cost of fixing bugs no longer rises over the course of development unless you have to restart from the beginning. Finally, often the problem is not the code developers write but the libraries they use, as Matthew Green discusses in Developers Are Not the Enemy.

UK Hydrographic Office: security champions
Neville Brown runs the software engineering team at the UK Hydrographic Office, part of the Ministry of Defence that provides sea charts and marine geospatial information. UKHO’s biggest customer is the Royal Navy, but it is also a trading fund. Of its 850 staff, 50 are software engineers.
UKHO decided this year that it needs to improve its security, which poses several challenges. The office has a large variety of teams, technologies, and security contexts. On the defence side, UKHO can afford to slow things down as much as needed to ensure things are really secure, but the commercial side is subject to the usual market time pressures. Therefore, no one size fits all the office’s circumstances. Brown can’t impose standards or practices – these would either slow down the commercial side or make the defence side insecure. In addition, today’s agile development process means there is no obvious time for experts to give advice and few documents to review, and teams self-organise. Therefore, expertise needs to be situated within each team, tailored to each team, compatible with agile development, and rooted in the way developers work.
UKHO is therefore in the process of setting up security champions, giving some developers extra skills and responsibility to coach and influence the team to adapt their processes – “like scrum masters in security”. The champions themselves also work together as a virtual team.

So far, Brown has learned that there are plenty of interested people with a mix of motivations: career development and advancement, money, even the desire to create secure code. A few seem more interested in hacking, a warning to Brown that he needs to plan for that. For their planned April launch they need help – speakers, games, and activities; they are struggling to find a canonical set of training materials and body of knowledge.

Panellists suggested that while the dual roles make sense, a difficulty may be ensuring that the security champions have enough time to do both parts of their job well and maintain the security skills they’re championing. It’s a common problem that these roles are set up with enthusiasm, but over time the champions find themselves charged with additional responsibilities without the accommodation they need. In addition, some of the important aspects of this job are simply boring – an example given was SQL injection, which is a big problem, but a dull one. Motivation can also be a problem; will money and time be allocated for teams to implement the recommended security improvements? In this sense, the state of security today is similar to the state of user experience design 20 years ago; champions were one of the ways that was solved, so some of the other methods used then might also be applicable here. A final issue is that the champion model works well when the champions are right, but when they’re wrong people will still follow them – a problem that may increase over time as the champions have less and less time to improve and update their own skills. For that reason, developing communities within organisations that can improve security may be a better approach than focusing on a few charismatic individuals.

In work in progress studying a variety of interventions, Charles Weir, Ingolf Becker, and Angela Sasse are finding an early clear result: the team doesn’t necessarily learn from the champions. Champions may work if you already have someone who knows what they’re talking about, but there isn’t yet a good way of training up those advocates and keeping them motivated and rewarded. The researchers are working on ways to encourage and train advocates who can go from team to team, spreading the word and solving many problems instead of just one. Champions may be best viewed as one tool among many; the greater mission is to make every software developer slightly better at security. Weir gave more detail about this project in the following session of lightning talks.

ThoughtWorks: Sensible Conversations
The idea behind Sensible Conversations, said Jim Gumbley, principal security engineer at ThoughtWorks, is to build capability. Traditional approaches like threat modelling and risk assessment are genuinely hard and often leave developers confused. Gumbley’s goal for the last year has therefore been to make threat modelling simple. Hence Sensible Conversations, which gathers a cross-functional group to share their understanding of what needs to be protected and why, what the real threats are, and where there might be technical exposure. Then the group can prioritise the most valuable next steps.
Getting the scope right is crucial to ensuring that the discussions stay on track. To identify scope, ThoughtWorks facilitators use a component architecture diagram and “asset” cards. There are also threat cards – for example, “A random botnet or script kiddie mounts an automated attack on the system”; a GDPR card is a recent addition. Using these as cues, the team explores the likelihood and impact of threats and prioritises them for further discussion. The participants then split into smaller groups and use exposure cards and checklists derived from OWASP and NCSC guidance to assess the organisation’s exposure and what its workforce needs to improve. The groups then reconvene, report their findings, and agree three next steps.

To date, every such exercise ThoughtWorks has run has produced valuable security next steps and has proved a good way to connect security and delivery teams.

ThoughtWorks is still refining and simplifying this approach, and is looking for feedback. It intends to use a train-the-trainer model to create more facilitators and to open-source the materials.

A panellist asked how this approach would help security problems become visible to, for example, an agile team. Gumbley noted that sometimes teams don’t know what a particular threat – cross-site scripting, for example – is, and their next step may be to go learn about it. Or the groups may simply say that whenever they look at a story concerning the database they have to take threats like SQL injection into account.

Another panellist asked about measuring the resulting improvements. ThoughtWorks has found some patterns across the 20 sessions of this kind it has run across a variety of industries. People consistently ask for copies of the cards or help writing stories. In one case, a ThoughtWorks discussion got a project cancelled because a P&L manager discovered that two security groups were each spending a couple of hundred thousand pounds trying to fix the same problem. In every case, the effort has produced awareness, insight, and action. Getting people to participate has not been difficult; they’re usually asked to do so by the project manager.

One question was how to ensure that the discussions are not building false confidence, as the Dreyfus model of learning shows that people consistently overestimate the amount they’ve learned in formal training. ThoughtWorks, however, isn’t trying to teach security but to raise awareness.

Choosing winners
The panellists were asked which of these efforts they would invest in. All three met general approval. However, Sensible Conversations got the most votes, followed by Secure Code Warrior. As Helen L noted, a lot depends on what kind of return an investor is looking for. Sensible Conversations is the most holistic and might be the best choice for organisations struggling with a security flaw in their business processes instead of a vulnerability in newly-written software. Secure Code Warrior is the most overtly investor-friendly because it offers a tool that can be sold.