
How does security become routine? An ethnographic study in a software company

Laura Kocksch is a social anthropologist at Ruhr University, Bochum. Kocksch works with computer scientists, security experts, and developers to research organisational challenges in IT security. Her talk discussed implementing IT security as a practical challenge through two ethnographic studies conducted while she was at the Fraunhofer Institute for Secure Information Technology (Fraunhofer SIT).

The first began with the question, Can security become an organisational routine? The researchers set out to study how to facilitate change and maintain it by talking to developers in a real-world organisation. Security can be facilitated in a number of ways: by creating a law, particularly for critical infrastructure; or via a crisis, such as the one surrounding Volkswagen's emissions control systems. What Kocksch sought to establish, however, was how security can be facilitated in an ongoing process or running system.

The computer scientists in the project were interested in tool adoption and the fact that there is very little empirical evidence about secure software engineering within companies; most of what is known is purely anecdotal. They therefore wanted to know what happens when the topic of security enters a software company, and what effect security consultants have on organisational routines within a software development group. The social scientists were more interested in technology adoption in general and in socio-technical situations: the way the social environment interacts with technological frameworks and practical actions. Their additional questions were therefore: What practices are triggered by a security consultation? How does security consulting affect organisational routines in a software development group?

The researchers observed a penetration test (pentest) conducted by an external security consultant who had access to a running process and its code. The flaws found in the pentest were submitted to an internal tracking system, and the consultant then conducted a three-day in-person training workshop, to which staff from across the organisation, as well as the researchers, were invited. The workshop covered an in-depth presentation of vulnerability types, general awareness-raising, hands-on hacking exercises, and a hacking challenge. In the eight weeks after the workshop, 48 of the 53 security flaws that had been found were fixed. The workshop was widely appreciated, and the software developers were euphoric, but the question was: would this result in long-term change?

The researchers continued the study with 14 interviews with both developers and management and a questionnaire, and analysed the product group's internal documents. They found that despite the developers' eagerness to fix the issues after the workshop, it remained a one-off event, and the developers wound up dissatisfied that they could not change how they were working.

The researchers found two important factors. The first was that the consultant could not facilitate long-term change. Organisation science distinguishes two aspects of routines: the ostensive, or structural, and the performative, or practice-based. In their 2003 paper Reconceptualizing Organizational Routines as a Source of Flexibility and Change, Martha Feldman and Brian T. Pentland define routines as repetitive actions that are accountable to others. The developers struggled with the interplay of those two aspects: the structural aspects of the organisation did not support change in security practice, and the developers' practices could not change the structure.

In interviews, some agreement emerged among the developers and managers: the team was comfortable with being self-organising around the issue of embedding security in new features. The developers were highly experienced, with five to over 30 years in the job, though they were less experienced in security. They did not want strict guidelines issued from the top telling them how to do their job. The researchers' hypothesis was that in this group security was, like the other -ilities (usability, maintainability, scalability, availability, extensibility, portability…), an aspect of quality. The difficulty was that quality has always been in the charge of technical experts; this is a problem for security, because security is not as visible, tangible, or accountable. Security therefore did not translate into the goals of the company: it is not a feature that a product manager can show off as a selling point, and if it is not, are the developers justified in spending time and energy on it? The upshot was that the agreement in place between developers and managers was itself a barrier to change.

The second factor was that what motivated the developers was the enjoyment of putting things together and seeing them work. They took pride in what they did, but needed incentives, and security apparently produced no feedback. In addition, the workshop presented security as an individual task, while the developers worked collaboratively. The lessons were that security had to be made more accountable, so that developers could justify spending time on it to their superiors, and more tangible, so that it could become a goal in future development processes. Security also had to be made interesting: the company needed to make it something the developers could take pride in. Finally, the relationship between the developers and the security team needed to change: the developers saw the security practitioners as the source of strict guidelines that they felt did not match their practices.

In conversations with management and some developers afterwards, the company initially thought the solution would be easy: find a stakeholder whom everyone could consult. Over time, however, this idea came to include the need for the stakeholder to respect the organisational framework. Kocksch suggests that one reason we hear so little about security issues in companies like Facebook and Google is that they hold annual hacking challenges to make security tangible and fun; this product group began considering following suit.

The second ethnographic study asked, Can a system be planned to be secure? How do we do security-by-design? One possibility is using threat modelling techniques. The study took place in a German secure data centre. Under German law, the data is strictly protected and had until then been accessible only in person, with the owner checking in with their ID; the proposal was to create remote access.

The system in place had two groups of stakeholders: the archive staff, who are experts in data protection law and security; and the IT staff, who provide the IT systems and were involved in creating the new remote access facility. Both groups were asked, via a simple mind-mapping exercise, what they thought the risks were. The perceived threats varied vastly across the staff: the IT personnel had pretty good modelling techniques, whereas the archive staff did not know what to do.

The researchers found a chicken-and-egg situation with a lot of uncertainty. On the one hand, it was unclear to the IT staff what the security constraints were for the solution they were supposed to build; on the other hand, the archive staff needed to know what the IT system would look like in order to have some idea how to secure it.

In conclusion, “doing IT security” poses problems. It is a challenge for organisational structures (which in turn pose problems for security); it is not a linear problem; it is not just like any other “ility”; and it is a sociotechnical challenge. Finally, security-by-design poses challenges for both developers and users. Agile development poses additional problems for security but also opens additional bottom-up possibilities.

A questioner raised the difficulty of telling a developer that the child they’ve just created is flawed, and noted that the “bystander effect” may mean that if no one is specifically accountable no one may take responsibility. Kocksch noted that the workshop inspired developers to try to teach others; the problem was the lack of organisational support.

A second questioner found the second study “depressing” and asked how to move forward. Kocksch suggested collaboration and open discussion of why decisions have been made rather than thinking about assets.

A third questioner asked what the barriers were to introducing a methodology that would turn non-functional requirements like security into functional ones that could be implemented. The management in Kocksch’s study did not see functional requirements as the right place to discuss security, because there were many discussions of trade-offs in which developers were not involved.

This talk/discussion was part of a RISCS/NCSC workshop on securing software development in November 2016. The day’s discussions led directly to the research call that funded Why Johnny Doesn’t Write Secure Software and Motivating Jenny to Write Secure Software, among others.

About Wendy M. Grossman

Freelance writer specializing in computers, freedom, and privacy. For RISCS, I write blog posts and meeting and talk summaries.