Peter Davies: Forward security for emerging problems

Peter Davies

“We are trying to solve a cyber security problem in a system that’s going away and ignoring the one that’s emerging. It’s a really big problem.” Practitioner panel member Peter Davies, director for security concepts “and head of cunning plans” for Thales e-Security, a company within Thales, could probably have a very successful second career doing stage shows scaring people, though he says that’s never his intention. His job is not merely to protect his own company from attacks but to do the same for the many customer companies that depend on Thales products for their security and that of their millions of customers all over the world. “As a practitioner,” he says, “my job is trying to solve their problem going forward.”

What he’s getting at with that opening statement is that he sees a significant problem with security as we typically conceive it and researchers study it: too much of both theory and practice is looking backwards and solving the convenient or the irrelevant. We are always trying to solve the last problem or trying to defend against the last attack, but rarely looking ahead to coming risks.

Because of Thales’ particular position in the industry – its products are used to secure more than 54 governments, every major Western bank, every major IT supplier, and many other security suppliers and corporations – Davies is often holding the canary in the coal mine when new attacks are conceived and launched.

One of his favourite examples was his early discovery of the potential for individual targeting. In the case of supply chain companies, the attackers’ goal is often not to steal data, as it would be in most enterprises, but to infiltrate and use the company’s products as vectors for distributing targeted attacks based on autonomous bots. Many of these are designed to have their effect even behind air gaps. The 12 to 15 significant – that is, industrial-scale, custom-built – attacks Davies sees each year often create the effect of an insider attack by precisely targeting specific individuals based on their roles within companies. In one such attack a few years ago, the goal was to corrupt code destined for a particular customer. It used information the attackers had gathered on six company employees from four different networks in four different places, which included the targeted individuals’ home computers.

“When an attack is potentially business-terminating – not a matter of compliance but of survival – it seriously requires you to bring your A game,” says Davies. “When it’s simply a matter of compliance then security is generally done by a company, organisation, or department, but this had nothing to do with that. My attackers were breaching those boundaries that we’d conveniently put up.”

The first thing you notice about Davies is his elliptical way of speaking: he describes what he’s seen while leaving out all the names and labels. “It’s very difficult and not helpful to discuss some of the more specific things that are there,” he explains, “because the fact that you know these things becomes a vector of attack in itself. One of the things that makes you most exposed is what you know.” He often talks about knowing his attackers, and in some cases he means it literally. The serious attackers he talks about – organised like ordinary businesses – are often led and directed by social scientists, not techies.

That level of carefulness has been around him all his life: his father was Geoffrey Davies, one of the pioneers of miniature implantable pacemakers, and he grew up surrounded by safety-critical electronics. By the time Davies was 16, he knew the electrical routes through the human body that can be used to implant a pacemaker and was helping his father make circuit boards. At university in Wales he studied philosophy and logic – “I always picked things I couldn’t do”. Both proved good choices, as philosophy taught him to grasp and understand new concepts rapidly. Also good training was the state of the department at the time: the logic half was at war with the ethics half. Making the wrong argument in the wrong place to the wrong person could lead to failure. “It was very good training for working in business.” For his post-graduate degree, he did computer science and mathematical statistics, also in Wales. Oddly to modern ears, computer science was part of the English department, essentially because the professor there had come from GCHQ and Bletchley Park, where his work had involved applying computer science to language.

By the time he left in the 1980s, his choices were AI or security, and he picked security. “That was when people were talking about expert systems, and it seemed to me there wasn’t enough processing power to make them work. Or data.” In hindsight, he was correct. Most modern systems are based on ideas that were current then but that required time for Moore’s Law to do its work and provide the necessary processing power and the rise of companies like Google and Facebook to provide large enough databases. By contrast, security looked like a field where the complexity of attacks was increasing exponentially while defensive techniques were not. “According to Moore’s Law I’m on a linear improvement in my computing power. I will never catch up with the threat,” he says. “That means you have to be smart.” An important element of that is reducing the attack surface up front, “bringing you somewhat back into line with things you can do.”

Davies’ particular emphasis at the moment is on resilience. “You need to anticipate that it will go wrong and build so that it’s a manageable, rather than catastrophic, failure.” Some of today’s difficulty for enterprises in understanding their own systems, he argues, is directly traceable to the loss of the historical knowledge people had of how and when the system made mistakes. “The original knowledge of what they were trying to do is often not present any more,” he says.

This is the reason for his emphasis on thinking about resilience and planning how to recover when things go wrong. When asked what his hardest problem is, this is what he names: “Getting people to look at digital survivability and resilience as opposed to compliance.” It is, he says, hard enough to get them to do the things that NCSC talks about, but it’s much harder to get them to look ahead to understand where developments will take us over the next decade and what they need to do to prepare.

Even at RISCS meetings, Davies finds the cyber attacks he worries about are not well represented. “What I’m looking at is, if I’ve got personally identifiable information and a safety-critical system coexisting then they need starkly different security mechanisms. One may absolutely need to be encrypted, and one needs not to be so that I can monitor it. And where that data is 103 Exabytes of data a day sitting on the same processors in the same system at the same time, that’s an inherently contradictory set of things they’re doing. Whichever one I choose there’s going to be a hole I can’t magically get rid of that will result in opportunities for novel attacks that will succeed.” In line with that, “I’m particularly interested at the moment in the use of legitimate data in ways that the system can’t combat, resulting in unexpected and unanalysed system consequences – for example, data that won’t run in the accelerators of a real-time system.” Attacks of this kind, like every attack, have their antecedent analogues, but essentially involve finding ways to tie up a system’s resources so that it can’t function effectively. Such attacks are a frightening prospect because they don’t rely on errors or vulnerabilities but on using knowledge the attacker has gained about how a system works. “There’s no way to avoid this.” These are the more sophisticated attacks he finds coming his way and they are particularly aimed at the hyper-connected hybrid cyber-physical systems now being built.
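
Davies doesn’t name specific techniques, but one well-known analogue of this class is an algorithmic-complexity attack: input that is entirely legitimate and well-formed, yet chosen to drive a component into its worst-case behaviour. The sketch below is a generic, hypothetical illustration of that idea using a regular expression prone to catastrophic backtracking; it is not drawn from the systems Davies describes.

```python
# A generic illustration of "legitimate data" tying up a system's resources:
# the input below is syntactically valid, but each extra character roughly
# doubles the time the vulnerable pattern takes to reject it.
import re
import time

PATTERN = re.compile(r"^(a+)+$")   # textbook example of catastrophic backtracking

for n in range(18, 25):
    payload = "a" * n + "b"        # well-formed input that can never match
    start = time.perf_counter()
    PATTERN.match(payload)
    elapsed = time.perf_counter() - start
    print(f"{n + 1:2d} characters: {elapsed:.3f}s")
```

Nothing here exploits a bug: the pattern behaves exactly as specified, which is why, as Davies says, such attacks cannot simply be patched away.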

Davies, therefore, takes the view that “I expect to operate in a failed system.” He already has some ideas of protective strategies. “One thing I know is you should be extremely careful when you invest in monocultures. I can find nowhere that a monoculture has survived, and when it fails it will almost certainly fail catastrophically.” In his view, it’s essential to consider carefully how you might introduce “randomisation” into a system to limit the extent and consequence of a compromise. Think 2 million autonomous cars, or railroads, or the electrical grid.
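
As a rough illustration of what such randomisation might look like in practice (the variant count, fleet size, and key scheme below are assumptions for the sketch, not a Thales design), consider provisioning a large fleet so that no single exploit or leaked credential reaches everything at once:

```python
# A toy sketch of diversifying a fleet to avoid a monoculture: devices are
# randomly assigned to one of several independently built software variants
# and given per-device keys, so one exploit or one stolen key no longer
# compromises the whole population. All numbers here are illustrative.
import random
import secrets

SOFTWARE_VARIANTS = ["build-A", "build-B", "build-C", "build-D"]

def provision_fleet(size: int) -> list[dict]:
    fleet = []
    for device_id in range(size):
        fleet.append({
            "id": device_id,
            "variant": random.choice(SOFTWARE_VARIANTS),  # breaks the monoculture
            "key": secrets.token_hex(16),                 # unique per device
        })
    return fleet

fleet = provision_fleet(100_000)

# An exploit that works against only one variant now reaches roughly a quarter
# of the fleet rather than all of it; a stolen key unlocks exactly one device.
exposed = sum(1 for d in fleet if d["variant"] == "build-B")
print(f"{exposed / len(fleet):.0%} of devices share the exploited variant")
```

The point is not the mechanism itself but the property it buys: when the system fails, as Davies expects it to, the failure is partial and manageable rather than fleet-wide.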

In addition, security can’t go on being a specialist subject in a silo. Where safety of life is involved, companies have to sign off that their product is safe. “You can’t say to the court, it’s safe as long as it isn’t a cyber attack.” Instead, it will become essential to be able to argue that it’s safe even after being attacked. “That’s worlds away from waving my hands and talking about how to get people to do passwords the right way.”

Also on his agenda is helping people understand that they may be collateral damage rather than the object of the attack, and that they don’t understand what a compromise of their machine learning systems might look like because they don’t understand what those systems are doing. “Those are the types of cyber attacks that I think are to do with the resilience of systems going forward, and you can’t just address them by coding correctly.” As he asked at a RISCS meeting in 2016 to illustrate the mismatch between today’s thinking about security and what will be needed tomorrow, “Will it be acceptable to a court that you drove into a wall simply because the instruction was correctly signed?”

So these, for Davies, are the emerging problems. Many security practitioners until now have been working in environments where “security” largely meant “compliance”. Look at cars: you think of the machine in your driveway as a “car” but in fact it’s a heterogeneous network that generates 103EB of data per day, according to IBM, and just happens to have wheels and a chassis.

“I think the cyber attacks we’re looking at now are just a moment of time. We’ve known about this stuff for 30 years, and we need to spend most of our efforts on leapfrogging it. Otherwise we’re in a situation where our improvement is linear but the attacks are escalating exponentially,” he says.

Paul Iganski: Ethical debates for practical application

Paul Iganski (Lancaster) is associated with the Centre for Research and Evidence on Security Threats (CREST), where he chairs the Security Research Ethics Committee.

CREST is a multi-university national hub for research on security-related matters. It is commissioned by the ESRC with funds in part from the UK security and intelligence agencies.

In chairing the ethics committee, Iganski is quasi-independent; his job is to ensure that all research associated with the Centre follows good ethical practice. Any project funded by CREST or associated with it must go through the project’s home institution’s ethical procedures, and then CREST’s. On average, the CREST Security Research Ethics Committee receives six applications per month, and to date few applications have passed through the committee without queries and suggestions related to the ethical concerns affecting security research.

Iganski discussed three of the areas that have generated the most debate in the committee:

  • Ethical concerns regarding secondary analysis of research data; this might be a traditional small, closed data set from empirical research.
  • Concerns around confidential data about people, access to which the stewards or holders of that data might open up, such as police records, victim statements, suspect interviews, Crown Prosecution Service records (with prosecutors’ reflections), court records, and records from the probation services and others; not all of this is in the public domain and it represents a vast amount of records that criminological and other researchers access.
  • Open source big data, such as public online interactional data from social media, primarily Twitter, but also other sites like Facebook.

For the purposes of discussion, Iganski began by assuming that in each of the above scenarios the data providers had not given their specific informed consent for their data to be used in the projects. In such cases, an ethics committee serves as a proxy research participant and, on behalf of the original data provider, makes an informed decision about whether to participate in this new use of their data. There are also many studies where consent was never obtained for future reuse of data. Rather than take the absolutist stance that all such reuse should be barred and given that in this type of work the original respondents can’t be contacted to give their consent, an ethics committee again serves as a proxy and decides on a case-by-case basis.

Iganski outlined a hypothetical case in which the original participants were not asked and the applicants wanted to use the data. The first concern is the potential for harm if the individuals’ anonymity is betrayed, which goes far beyond a broken promise. So the committee asked questions: What did these individuals agree to participate in? Did they know that their data might be reused in future in security-funded projects? Because these projects may have included extremists and terrorists, the people concerned might still be subjects of interest to the authorities.

In such cases, the committee has felt it was incumbent upon them to do more than speculate, and where possible, to canvass views from people similar to the original participants. In one such case, those surveyed were unequivocally against reusing the data, even decades later. When reviewing applications, Iganski’s group makes clear that consent forms must explicitly state the source of funding for the research, as well as whether the researchers anticipate that the data will be reused in future, whether by themselves or others.

Given today’s widespread use of social media, there is also a very real danger that the subjects can be reidentified via sophisticated searches linking their profiles to verbatim quotes appearing in academic publications. Twitter’s terms and conditions allow quotation, both in academic papers and in the media, but they require that the full text of the tweets, including the account holder’s handle, be published with no editing. For this reason, researchers concerned with hate speech have used published Twitter posts without obtaining informed consent from the account holders. Given the lack of privacy protection, these individuals can potentially be identifiable beyond the consent they intended to give, and publication may expose them to harms such as stigmatisation, ostracism, and physical harm.

Nonetheless, social media provides a valuable reservoir of unsolicited public opinion on discriminatory comments for those researching hate crimes and hate speech. In such research, an ethics committee serves as a proxy respondent for these account holders. If asked for their consent to publish their tweets, many would likely refuse. In fact, a November 2017 study by Matthew L. Williams, Pete Burnap, and Luke Sloan (Cardiff), published in the British Journal of Sociology, reported that in an online survey of over 500 Twitter users, the vast majority (80%) said that they would expect to be asked for their consent before their tweets were published in academic outputs, and an even larger majority (over 90%) said they would want to remain anonymous.

Iganski noted that some academics might argue that those posting racist speech on social media deserve whatever they get if they know they are publishing it in a public forum. Iganski’s view, however, is that, just like offline, many incidents occur in the heat of the moment, often with alcohol involved, and people often later regret what they have said.

In the subsequent discussion, a commenter noted that journalistic and academic ethics diverge in this area. Journalists assume that a tweet is a public statement that can be used under fair use; tweets are commonly quoted without permission in highly public settings. Iganski responded that by contrast academics have a longer commitment to the principle of consent and the protections of anonymity that go with it. Others thought the method used was interesting, but wondered how the committee would have handled it if the people they consulted had been divided down the middle or the Twitter users had been mixed in their opinions. Iganski said the committee would have had to make a judgement call bearing that evidence in mind.

This talk was presented at the November 24, 2017 RISCS meeting, “Ethics, Cybersecurity, and Data Science”, hosted by the Interdisciplinary Ethics Research Group at the University of Warwick.

How does security become routine? An ethnographic study in a software company

Laura Kocksch is a social anthropologist at Ruhr University, Bochum. Kocksch works with computer scientists, security experts, and developers to research organisational challenges in IT security. Her talk discussed implementing IT security as a practical challenge through two ethnographic studies conducted while she was at the Fraunhofer Institute for Secure Information Technology (Fraunhofer SIT).

The first began with the question, Can security become an organisational routine? The researchers set out to study how to facilitate change and maintain it by talking to developers in a real-world organisation. Security can be facilitated in a number of ways: by creating a law, particularly for critical infrastructure; or via a crisis, such as the one surrounding Volkswagen’s emissions control systems. What Kocksch sought to establish, however, was how security can be facilitated in an ongoing process or running system.

The computer scientists in the project were interested in tool adoption and the fact that there is very little empirical evidence about secure software engineering within companies; most of what is known is purely anecdotal. So they wanted to know what happens when the topic of security enters a software company and what effect security consultants have on organisational routines within a software development group. The social scientists were more interested in technology adoption in general and in socio-technical situations: the way the social environment interacts with technological frameworks, as well as practical actions. Their additional questions were therefore: What practices are triggered by a security consultation? How does security consulting affect organisational routines in a software development group?

The researchers observed a penetration test (pentest) conducted by an external security consultant who had access to a running process and its code. The flaws found in the pentest were submitted to an internal tracking system and then the consultant conducted a three-day in-person training workshop, to which staff from across the organisation were invited as well as the researchers. At the workshop, the consultant conducted an in-depth presentation of vulnerability types, general awareness-raising, hands-on hacking exercises, and a hacking challenge. In the eight weeks after the workshop, 48 of the 53 security flaws that had been found were fixed. The workshop was widely appreciated, and the software developers were euphoric, but the question was: would this result in long-term change?

The researchers continued the study with 14 interviews with both developers and management and a questionnaire, and analysed the product group’s internal documents. They found that despite the developers’ eagerness to fix the issues after the workshop, it was still a one-time event, and developers wound up dissatisfied that they could not change how they were working.

The researchers found two important factors. The first was that the consultant could not facilitate long-term change. When discussing organisations, organisation science talks about two aspects of routines: the ostensive, or structural, and the performative, or practice-based. In their 2003 paper Reconceptualizing Organizational Routines as a Source of Flexibility and Change, Martha Feldman and Brian T. Pentland define routines as repetitive actions that are accountable to others. The developers struggled with the interplay of those two aspects: the structural aspects of the organisation did not support change in security practice, and the practices the developers had could not change the structure.

In interviews, some agreement emerged among the developers and managers: the team was comfortable with being self-organising around the issue of embedding security in new features. The developers were highly experienced, with five to over 30 years in the job, though they were less experienced in security. They did not want strict guidelines issued from the top telling them how to do their job. The researchers’ hypothesis was that in this group security was, like other -ilities (usability, maintainability, scalability, availability, extensibility, portability…), an aspect of quality. The difficulty was that quality has always been in the charge of technical experts; this is a problem for security because it is not as visible, tangible, or accountable. Therefore, security did not translate into the goal of the company: it’s not a feature that a product manager can show off as a selling point, and if it’s not, are the developers justified in spending time and energy on it? The upshot is that the agreement in place between developers and managers was a barrier to change.

The second factor concerned motivation: what drove the developers was the enjoyment of putting things together and seeing them work. They take pride in what they do, but need incentives, and security apparently produced no feedback. In addition, the workshop presented security as an individual task, while the developers worked collaboratively. The lessons were that security had to be made more accountable, so that developers could justify spending time on it to their superiors, and more tangible, so that it could become a goal in future development processes. Security also had to be made interesting: the company needed to make it something the developers could take pride in. Finally, the relationship between the developers and the security team needed to change: the developers saw the security practitioners as the source of strict guidelines they felt couldn’t match their practices.

In conversation with management and some developers afterwards, the company thought it would be easy: find a stakeholder that everyone could consult. Over time, however, this idea came to include the need for the stakeholder to respect the organisational framework. Kocksch suggests that one reason we hear so little about security issues in companies like Facebook and Google is that they have annual hacking challenges to make security tangible and fun; this product group began considering following suit.

The second ethnographic study asked, Can a system be planned to be secure? How do we do security-by-design? One possibility is using threat modelling techniques. The study took place in a German secure data centre. Under German law, the data is strictly protected and until now only accessible in person when the owner checks in with their ID; the proposal was to create remote access.

The system in place had two groups of stakeholders: the archive staff, who are the experts in data protection law and security; and the IT staff, who provide the IT systems and were involved in creating the new remote access facility. They were asked what they thought the risks were via a simple mind map. The perceived threats varied vastly across the staff: the IT personnel had pretty good modelling techniques, whereas the archive staff didn’t know what to do.

The researchers found a chicken-and-egg situation with a lot of uncertainty. On the one hand, it was unclear to the IT staff what the security constraints were for the solution they were supposed to build; on the other hand, the archive staff needed to know what the IT system would look like in order to have some idea how to secure it.

In conclusion, “doing IT security” poses problems. It is a challenge for organisational structures (which in turn pose problems for security); it is not a linear problem; it is not just like any other “-ility”; and it is a sociotechnical challenge. Finally, security-by-design poses challenges for both developers and users. Agile development poses additional problems for security but also opens additional bottom-up possibilities.

A questioner raised the difficulty of telling a developer that the child they’ve just created is flawed, and noted that the “bystander effect” may mean that if no one is specifically accountable no one may take responsibility. Kocksch noted that the workshop inspired developers to try to teach others; the problem was the lack of organisational support.

A second questioner found the second study “depressing” and asked how to move forward. Kocksch suggested collaboration and open discussion of why decisions have been made rather than thinking about assets.

A third questioner asked what the barriers were to introducing methodology that would turn non-functional requirements like security into functional ones that could be implemented. The management in Kocksch’s study didn’t find functional requirements the right place to discuss security because there were many discussions of trade-offs in which developers were not involved.

This talk/discussion was part of a RISCS/NCSC workshop on securing software development in November 2016. The day’s discussions led directly to the research call that funded Why Johnny Doesn’t Write Secure Software and Motivating Jenny to Write Secure Software, among others.

Kat Hadjimatheou: Ethics and algorithmic decision-making

Kat Hadjimatheou (Warwick)

Kat Hadjimatheou, a researcher with the Interdisciplinary Ethics Research Group at the University of Warwick, discussed how her group applies Tom Sorell’s outline of ethics to emerging technologies such as profiling with big data, where the criticisms made from the point of view of autonomy and fairness range from the well-founded to the misguided.

Many projects Hadjimatheou’s group participates in involve practitioners from various types of organisations – police, security, technology companies, local authorities. Often, these practitioners expect that if they are unlucky enough to have to work with ethicists they will always be hearing the word “no” from the “moral police”. Data scientists and start-ups are particularly frustrated by this when they’re excited about their latest discovery; the same is true of the police, who want to be able to do the same things criminals can do. Sometimes ethicists just say “yes”. More often, however, their response is to suggest thinking more deeply about the potential problems. By analysing these and working out which are the most compelling, they try to think of ways to address them in advance.

An important part of this work is recognising the roles and responsibilities of different actors in the digital sphere. For example, often criticisms of profiling don’t distinguish among those who are doing it. Yet this is important: a company has a different duty to clients than a police force does to members of the public in a democratic society. Hadjimatheou’s group tries to weigh the arguments and take into account who they’re talking to and what their competencies and roles are.

Profiling with big data is used in many ways. First, mining big data makes it possible to find useful patterns and correlations that are imperceptible by other means.

Second, profiling is used in risk assessments in applications such as insurance, criminal justice, and border control. Hadjimatheou offered three examples drawn from American Express, the Chicago police, and IBM. In the first of these, a businessman who had been an exceptionally good customer for many years found his American Express card credit limit massively reduced simply because he had shopped in a store in which people with poor credit ratings also shopped. This was only revealed to him by the credit card company after he applied a lot of pressure, including via the mainstream media.

The Chicago police use algorithms to identify individuals at high risk of being involved in gun crime, and use the results as a basis for attempting to intervene. It’s not clear whether the results are given to police responding to emergency 911 calls, or what other agencies have access to the system. London’s Metropolitan Police has a similar effort, though one that’s less algorithmic and data-driven. In London, when the Metropolitan Police judges that someone is highly likely to be a gang member, the suggestion is shared with housing, social services, and youth services, and these disclosures have affected people’s access to services, including housing.

Finally, IBM Enterprise Intelligence is advertised as being able to tap into many sources of data, including dark web data, to make risk assessments of individuals applying for asylum. The company doesn’t specify how its results may be used.

In order to serve ethical values such as autonomy and fairness, respect people’s capacity to choose how to live their lives, and grant them the opportunity to pursue their choices, all these decisions should be made for transparent reasons.

There have been a number of criticisms of each of these types of profiling. Some of these criticisms misfire, and miss what is really at stake. For example, the ACLU calls American Express’s use of behavioural scoring economic guilt by association. The Electronic Frontier Foundation objects on the basis that this “minority report” approach treats people as if they have already committed a crime. Hadjimatheou argues, however, that the “minority report” argument would only be valid if being on the list led to arrest and a presumption of guilt – because that is how people who have committed a crime are treated. Treating people as more likely to commit a crime is different, and the legitimacy of that treatment depends on the strength of the evidence on which it’s based.

Hadjimatheou’s suggested alternative approach is to look at the case through a contractualist lens to consider what complaint someone might make. The American Express decision left the man with no opportunity to adapt his behaviour to change his risk score, and the fact that the score was used at all, given his lengthy history with the company, confounded reasonable expectations. The decision fails at fairness; the questionable and obscure basis for the risk score offers no opportunity for challenge, and means that people are treated as poor credit prospects for potentially unsound reasons.

The “guilt by association” criticism levels the same objection at all three of the above cases, but fails to acknowledge that the practical implications differ between contexts. The disadvantage and unfairness inflicted as a result of profiling are worse in the Chicago case, for example, where they affect people’s access to basic goods such as housing, than in the American Express case, where the decision does not close off alternative channels for securing credit. As a general principle, however, these systems should give people opportunities to challenge the results and explain the basis of the scores.

In response, an attendee described realising, while working in the data mining industry, that they had no scientifically understandable model they could use to defend themselves in court if someone died, because the underlying correlations only surface after the fact, when questions are raised. In those cases, it’s never clear whether the reason given is believable. So the question is how to make these systems accountable if we can’t trust businesses to tell us the truth. Answering these questions is still in its early stages. Hadjimatheou noted that police increasingly rely on private companies to monitor and analyse the dark web, some of which have a monopoly on historical data. As a result, some police are saying they’re not sure whether the evidence they have is reliable and will hold up in court. Hadjimatheou thought there might be an analogy to white-hat hackers testing cyber security that could be used as a model for testing algorithms. A commenter added that finding out about problems after the fact is “part of the process of getting algorithms to match our moral sensibilities, which are changing.”

Angela Sasse felt that it’s essential to be aware of and address some of the human fallibilities in the learning itself. Objections to algorithmic systems are often batted away by saying that humans are more prejudiced than algorithms, a claim that needs to be challenged. It’s essential to get to the point where we can ensure that there is sound evidence that a prediction is valid. Law enforcement, if asked what data is useful and how long it should be kept, feel morally justified in saying “everything” and “forever” because it might be useful. A respondent with knowledge of the intelligence services disputed this statement, saying that law enforcement obeys the law, which prohibits keeping data forever, and that everyone should be aware that these are issues that law enforcement discuss among themselves all the time. This led Tom Sorell to ask how ethicists can talk to people. The more generally ethicists speak, the less people understand them. The more they “go native” with the intelligence services and the police, the less they’re normative.

This talk was presented at the November 24, 2017 RISCS meeting, “Ethics, Cybersecurity, and Data Science”, hosted by the Interdisciplinary Ethics Research Group at the University of Warwick.

Developer Centred Security Workshop

Helen L, from the new National Cyber Security Centre, laid out a series of questions for the day to be discussed collaboratively over whiteboards placed around the room, in order to understand the challenges software developers face that result in insecure products and services.

NCSC, created in October 2016, brings together several previous groups – CESG, CERT-UK, and CPNI – into a single organisation based partly in Cheltenham and partly near London’s Victoria station. Helen works in the Socio-technical Security Group (StSG), which was set up in April 2015 to consolidate several teams: the Engineering Processes & Assurance team (which Helen leads); the Risk Management team (which John Y leads); and the People-Centred Security team (which Emma W leads). Joining these groups together, Helen said, enables them to tackle complex topics like cyber security in a better way.

Developer-centred security was a hot-potato idea for which no one had responsibility, even though many people saw it as an important issue. The formation of the socio-technical security group puts them in a better position to work on this problem.

Crucially, the role of the human in cyber security systems is becoming recognised: technology by itself is not enough. Much research has focused on end users, but there are many other types of user – developers, sysadmins, and others – who are also part of a larger system and need to be thought about. The NCSC group today brings together members from many different disciplines to tackle this complex problem of developer-centred security: social science, computer science, natural science. For developers, secure code may not be what matters most: functionality, up-time, maintainability, and usability may all be seen as more important. Security sits at the bottom of that stack and is often traded off against those other needs.

Helen highlighted some issues by asking: what if someone’s life depends on secure code? The obvious example is today’s pacemakers – implanted cardiac defibrillators (ICDs) – which are connected to the internet to enable them to pass data to the web portal the doctor uses to check up on each patient. At the recent O’Reilly security conference in Amsterdam, Helen heard a talk by Marie Moe, a Norwegian security researcher who investigated the code behind her own implanted ICD after a bug in the software caused her to collapse briefly.

Among the things she found:
– Her ICD had two wireless connections, one short-range to the home monitoring unit and the other from that unit to the web portal;
– Very little security testing had been done on the implant, all of it theoretical;
– A bug in the ICD software meant that settings on the device differed from the ones technicians and doctors could see on screen, which took a long time to figure out and had a direct impact on her well-being;
– Her brief collapse while climbing the Underground stairs at Covent Garden was caused by a default setting error (a simplified sketch of this kind of default appears after this list). The software was coded on the assumption that the device’s ultimate user would be 80 years old, with a much lower maximum heart rate than 35-year-old Moe’s. In turn, that meant the device abruptly cut her heart rate from 160 beats per minute to 80 and the ambulance had to be called;
– To date, there is no hard evidence (despite the plot in the TV series Homeland) that these devices can be hacked remotely, though short-range hacks are proven.
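
Moe’s account of the default setting shows how a single hard-coded assumption can become a safety issue. The sketch below is a deliberately simplified, hypothetical reconstruction of that kind of default; the parameter names and the shown fix are assumptions, not the actual device firmware, and only the ages and the 160/80 beats-per-minute figures come from the account above.

```python
# A deliberately simplified, hypothetical sketch of the kind of default
# described above; it is NOT real device firmware. The parameter names are
# invented; only the ages and the 160/80 bpm figures come from the talk.

DEFAULT_UPPER_RATE_BPM = 80   # default chosen assuming an elderly (80-year-old) patient


def paced_rate(demanded_bpm: int, upper_limit_bpm: int = DEFAULT_UPPER_RATE_BPM) -> int:
    """The device never paces above its configured upper rate limit."""
    return min(demanded_bpm, upper_limit_bpm)


# A 35-year-old climbing stairs needs around 160 bpm, but with the default in
# place the device clamps her rate to 80 bpm.
print(paced_rate(160))                        # -> 80: the default limit kicks in
print(paced_rate(160, upper_limit_bpm=170))   # -> 160 once the limit fits the actual patient
```

The failure here is not malicious code but an assumption baked in at coding time and never surfaced to the people configuring, or wearing, the device.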

Many of today’s common problems have long been solved. SQL injections, for example, are a straightforward attack, known since 1998, with tools long available to prevent them, but they are still exploited and can still have high impact. Heartbleed was a buffer over-read. A bug in a code library used by telecommunications products puts mobile phones and networks at risk of takeover. Tesla has had to fix bugs in its radar systems. The Ashley Madison hack captured 11 million passwords. All are examples of coding errors.
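
To make the point concrete, the sketch below shows the classic form of the SQL injection error Helen mentioned and the long-available fix, a parameterised query. It is a generic illustration using Python’s standard sqlite3 module, not code from any of the incidents named above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Vulnerable: the input is pasted into the SQL text, so the quote breaks out
# of the string literal and the OR clause matches every row in the table.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % user_input
).fetchall()
print(len(rows))   # 1: every row comes back, and the attacker controls the WHERE clause

# Fixed: a parameterised query keeps the input as data, never as SQL.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(len(rows))   # 0: no user is literally named "alice' OR '1'='1"
```

The fix has been standard practice for nearly two decades, which is exactly the point: the persistence of such bugs is less a knowledge gap in the field than a gap between what is known and what developers are supported to do.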

To a security expert, the questions are obvious: why don’t developers use protective measures? Why can’t developers get it right? They should know better. How do we close the gap? Is software the weakest link? Are developers lazy or unmotivated? As Matthew Smith and Matthew Green have asked, is the developer the enemy?

In fact, there are many factors in these large systems that could go wrong. Helen called the problem a “minefield to navigate”, one that requires a wide range of skills and is becoming more complex over time. Software is just one piece that can be vulnerable to attack. The most secure software will be worked around if it’s not usable. Products need to be secure however they’re going to be used, but there is very little advice or guidance available for developers to create this usability in their coding.

The day’s discussion, therefore, was intended to map out the landscape of these problems and find evidence of what developers actually experience, in order to be able to design appropriate interventions. Based on the day’s discussions, Helen’s group hopes to issue a call early in 2017.

 

This talk/discussion was part of a RISCS/NCSC workshop on securing software development in November 2016. The day’s discussions led directly to the research call that funded Why Johnny Doesn’t Write Secure Software and Motivating Jenny to Write Secure Software, among others.