In the final session of the February 2018 workshop, researchers from ongoing developer-centred security (DCS) projects presented brief introductions to their work, followed by a general discussion. The other sessions from this workshop are: introduction and summary, developer’s den, and reverse panel discussion.
Tamara Lopez: Motivating Jenny
- What motivates developers to adopt security practices and technologies?
- How do we develop and sustain a culture of security?
The project, which has engaged with practitioners from the beginning, has begun by characterising developers to understand what motivates them and what saps their energy, their values and needs, their talents, and the stage of their careers. One of the group’s first ethnographic studies looks at how developers talk to each other about security in posts on Stack Overflow. Using the archived top 20 questions of all time, three dimensions of talk have emerged: security advice, values and attitudes, and community involvement. Comments from developers, such as that it’s “inconvenient” to use a particular function, reflect what we hear at these meetings. When a respondent tells the questioner to just do it, are they prioritising one non-functional requirement over another? They may be saying: write good (secure) code, don’t worry about the user experience. The project is seeing some of the trade-offs between users and developers that RISCS talks about.
The Stack Overflow work will give them a sense of what things to listen for in the site visits they are planning next.
Dirk van der Linden: Johnny
Presenting for the Johnny project, Dirk van der Linden sought to add nuance by introducing the dimension of human behaviour rather than studying interventions alone, because interventions alter behaviour in a feedback loop. In addition, the project has chosen not to focus solely on professional developers (as the Jenny project does) but to recognise that software development is being done by the masses: everyone and no one. Before we can ask what motivates or influences developers, we need to know who they are. The project is accordingly trying to understand the diversity of developers. What groups cluster together? How do the people fit together? We can’t assume it’s the language they speak, their educational level, or what they’re working on. Understanding this complexity is essential before we can ask which intervention works on which people. A big part of the project is trying to understand that diversity, focusing on the range from individuals to small and medium-sized organisations.
Some discussion followed about developers’ motivations. Large organisations have management structures designed to implement interventions in ways that are known to work. The smaller the organisation, the harder it becomes to figure out how to affect someone’s behaviour. Writing quality code can be a motivator, but an indirect one, as in the open source community where people use their code as a portfolio they can build up to get a job. The project goes beyond motivation to consider how people write code, structure processes, and use tools to avoid mistakes.
Sascha Fahl: Simplicity Trumps Security
Sascha Fahl’s latest project, with Yasemin Acar and Marten Oltrogge, looks at the impact of “citizen developers” on software security. A relatively new phenomenon, citizen developers are users who create new business applications for others to use, rather than professional developers working in development and runtime environments sanctioned by corporate IT. Often they use code generators. There are many of these generators; they are easy to use, and laypeople can create apps with a mouse and drag-and-drop. However, the development process is a black box, and it’s hard to know what’s happening inside.
Fahl set out to ask two research questions:
- Are generated apps widely used?
- What is their impact on code security?
There are two types of generators: those you download and run on your own machine, and those you use online. Fahl began by building analysis tools to identify generated apps. Running these tools across 2.2 million Android apps, he found 250,000 generated apps with 1.1 billion collective installs. All of these came from a list of approximately 25 online generators. Analysis of the generated apps found that they duplicated known issues but also added new problems, such as using a single signing key to sign up to 30,000 apps. Put simply, he was seeing an automated lack of security.
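The talk did not describe the tooling itself, but the shared-signing-key finding can be illustrated with a minimal sketch: group apps by the fingerprint of their signing certificate and flag any fingerprint shared by more than one app. The package names and certificate bytes below are made up for illustration; in a real study the DER-encoded certificates would be extracted from each APK’s signature block.

```python
import hashlib
from collections import defaultdict

def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a signing certificate, the usual way to compare app signers."""
    return hashlib.sha256(cert_der).hexdigest()

def group_by_signer(apps: dict[str, bytes]) -> dict[str, list[str]]:
    """Group app package names by the fingerprint of their signing certificate."""
    groups = defaultdict(list)
    for package, cert in apps.items():
        groups[cert_fingerprint(cert)].append(package)
    return dict(groups)

# Toy corpus: two generated apps sharing one certificate, one hand-written app.
apps = {
    "com.example.generated.one": b"shared-generator-cert",
    "com.example.generated.two": b"shared-generator-cert",
    "com.example.handwritten":   b"unique-developer-cert",
}

for fp, packages in group_by_signer(apps).items():
    if len(packages) > 1:
        print(f"Shared signing key {fp[:12]}... used by {len(packages)} apps: {packages}")
```

Why the shared key matters: any party holding that one key can publish updates accepted as legitimate for every app signed with it, so a single compromise scales across all 30,000 apps at once.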
The result of this work was a paper that will be presented at the IEEE Symposium on Security and Privacy in May 2018.
Respondents commented that this was an important piece of work that has highlighted a scaled security issue. Future work will look at who is using these generators, information that will be needed to answer the question of how to make them more secure but still easy to use. The researchers are also planning coordinated disclosure.
Charles Weir: Majid Research Project – Helping Developers Improve their Security
Charles Weir (Lancaster) presented ongoing work with a team including Ingolf Becker and Angela Sasse on the Majid Research Project. Prior work had identified a range of inexpensive interventions. Now, expert consultants have set out to deploy these in teams in three widely different companies and record the process. Using dual coding of the transcribed sessions, the researchers have started to identify what makes the best improvements. They are in the early stages of analysis.
One small result lies in what developers have said they’ve found encouraging. First is the idea that security features could be a sales pitch, which happens more than we generally think. A second is gamification, for example, providing lists of red crosses that turn to green ticks. The results also suggest that better support helps developers feel their efforts are more worthwhile.
Questions focused on the role of community in gamification. For example, a green check at Stack Overflow means other users have upvoted a posting or solution. Another question was whether the researchers have considered whether any positive effects they find might be short term and die off over time. The project had only three months, which isn’t a long enough period of time to test that; the tail-off effect takes more like a year. Addressing that can be an aim for further work.
Manuel Maarek: Impact of gamification on developer-centred security
Manuel Maarek (Heriot-Watt), Sandy Louchart (Glasgow School of Art), Léon McGregor (HW) and Ross McMenemy (GSoA) are studying the impact of gamification on developers using coding-based games, competitions, interactions for education, and secure coding games. The main research question: does gamification have a greater impact on security tasks than on non-security tasks? The researchers’ hypothesis is that it does, based on the impression that adversarial thinking comes easily in security, a trigger that can be activated by putting the task inside a game.
Each participant is given six programming tasks; three have a security focus and three do not. Participants work in one of two settings: performing the tasks as a sequence of online programming exercises, or as part of an online game with programming exercises. As a control, the security tasks were chosen to partially replicate Sascha Fahl’s and Yasemin Acar’s SOUPS 2017 paper, and the effect of gamification is studied by comparing security and non-security tasks.
Laura Kocksch: Big data security, a clash of philosophies
Laura Kocksch, a social scientist who spoke at the 2016 RISCS developers workshop about her ethnographic study of training a software development team inside an organisation, discussed her new work on big data security.
Kocksch observes a clash of philosophies. She does not see security as just a technical definition that can be inculcated into developers; many developers already think about security and have their own picture of what it is.
Her previous research showed that there may be good organisational reasons for bad security. Extra work solving security problems rarely earns developers any credit, and they often struggle to keep up with the extra workload. Developers liked their training, and its effects stuck with them for six or seven months after it ended, but they still needed organisational support. If, for example, security doesn’t count as a feature request, it does not form part of their accountable work practices. Longstanding industry sectors such as energy and insurance are starting to take on big data and are accordingly making changes in their data infrastructure, which might offer a good opportunity for security by design.
For this project, she spent eight months inside companies studying the tension between data innovation and security, where the company sees big data as a big opportunity but believes that security always holds it back. She found a complete clash of cultures, which raised questions of how to translate between them to enable cross-disciplinary understanding. Social science has long-standing ideas of how this might work.
Kocksch has a number of questions. Where and when does security come up? When in the process is it important for developers to talk about security? What does it mean to them? These things need to be defined. Certain concepts that focus on establishing symmetrical discussions might apply, such as boundary objects and trading zones.
A questioner asked whether the concept of “security as a service” would help, so that the security team takes the humble approach of offering help to developers rather than saying no to things. Kocksch agreed that the people she talked to call the security department “the department of No”, often for good reasons. She is trying to open up the network of connections that are there, rather than trying to define big data. A key issue is whether the two groups want to work together; Kocksch has found that both feel there is a new infrastructure opportunity, but the problem remains that they have completely different goals.
General lightning round discussion
A number of points were raised in discussion of the lightning talks. Among them:
- How to nudge people to understand security, including developers and board members, who react to disasters like anyone else. Making people too fearful can be counterproductive in terms of motivating them. Boards are always being sold to, but they are interested in security, and a better approach may be to try to take that interest and build on it, nudging them to where they need to be. However, if the first approach is to say they’ve got it wrong and then bore them on the subject, they will turn off. Still, there is an exciting news story about security almost every day, and there are positive things that can be used as well as fear and shock. Can we use these stories to better effect?
- The difference between drivers for security and enablers for that security to happen.
- One thing missing from the workshop has been mental models, which are often the reason people make mistakes. This led to the question of what most engineers understand by “mental model”.
- As a corollary, research generally focuses on tools and actions, but can’t pinpoint where people’s thinking goes wrong.
- Outsourcing adds numerous difficulties. Who is responsible for the security of the outsourced IT? Organisations may have all the same discussions, but with outsourcing they are no longer talking about in-house developers but remote people who may be behind a salesperson.
- Related to this are concerns about getting quality into the supply chain. Is this a separate issue for security, or should security be incorporated into the contract guidelines when outsourcing? Ultimately, organisations have to trust the people working for them.
- How do we get people to take an interest in the boring and trivial parts of security? The people who invite consultants are often more interested in the sexy and advanced problems, such as stolen data or the latest cryptography algorithm, even though there may be 50 other more important steps to take first.