The quality of evidence

Thomas Gross

In their talk at the June 2017 RISCS community meeting, Thomas Gross and Kovila Coopamootoo (both Newcastle) discussed the results of a small grant project on evidence-based methods in cyber security. RISCS was founded to pursue an evidence-based approach, but the extent to which the community employs such methods to a high standard had remained an open question.

The motivation for the project stemmed from a workshop Gross and Coopamootoo ran in the summer of 2016 at IFIP’s privacy and identity summer school. The surprises they encountered in performing a systematic literature review of the submitted papers led them to extend these evaluations into a broader investigation of the research space.

The pair began with a systematic literature review focusing on papers in the field of human factors in security and privacy. Based on several research questions they defined, they ran a search query on Google Scholar and ended up with 1,157 papers to review, most of which came from the SOUPS conference. They narrowed this list to 146 using inclusion and exclusion criteria limiting the search to studies with human participants that lent themselves to quantitative evaluation. Of these, only 19 were eligible for quantitative meta-analysis evaluation. The qualitative analysis revealed that authentication, mostly to do with passwords, was the most frequent theme, followed by privacy.

The researchers sought to answer a number of questions about these articles. First: were the studies replicating existing methods or could they be reproduced in the future? Could they say if the papers were internally valid or not? How important was the effect the papers reported and what was its magnitude?

A large percentage neither used nor adapted existing methods. Of the papers whose results could be reproduced in the future, most described their methodology and measurement apparatus quite well. In terms of determining the validity of the papers’ findings, there was a large proportion in which details were missing, and 79% did not report on the magnitude of the effect they found or explain how it was important.

In the quantitative analysis, the researchers’ goal was to determine the state of play of human factors research in cyber security in terms of quantitative properties. What kinds of effect sizes are usually found? What confidence intervals do we get? The researchers did not actually carry out a meta-analysis, which would mean focusing on particular effects and seeing how they can be combined across multiple papers, though they did use tools created for that purpose.

The researchers began by coding the papers – that is, identifying the evidence within them that supports quality indicators and quantitative reasoning – in order to identify the papers suitable for a detailed view. This proved to be frustrating, because only 8% of the papers overall explicitly reported effect sizes, although the researchers could infer them for about half of the rest from the reported means and standard deviations. Gross and Coopamootoo found that 33% had small effect sizes that were quite situational; it was unknown whether the effect would hold in real life.
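
As a rough illustration of that kind of inference – a minimal sketch, not the researchers’ actual tooling – the standardised effect size Cohen’s d can be recovered from reported group means, standard deviations, and sample sizes:

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardised effect size (Cohen's d) for two independent groups,
    inferred from reported means, standard deviations, and sample sizes."""
    # Pooled SD weights each group's variance by its degrees of freedom.
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical figures of the sort a paper might report without an effect size:
d = cohens_d(mean1=4.2, sd1=1.1, n1=40, mean2=3.6, sd2=1.3, n2=38)
print(f"Cohen's d = {d:.2f}")  # about 0.50, conventionally a "medium" effect
```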

The takeaways from this work:

  • We have a replication crisis. There is very little reproduction of validated methods and measures. There were no replication studies in the entire sample, even though authors generally described their research well enough for it to be reproduced.
  • Only 12% fully fulfil the American Psychological Association guidelines for standardised reporting of experiments on human beings. This could be done much better, and improving this aspect would improve the state of the field.
  • Reporting on quantitative aspects such as effect sizes is weak. There would be considerable benefit in including parameter estimation in such research; it would make doing meta-analyses easier and substantially improve the state of the field.

In answer to a question, the researchers noted that they had looked for a correlation with the venues in which the papers originally appeared, comparing main conferences like Usenix and PET Symposium with more specialised ones like SOUPS and the LASER Workshop, and found no substantive differences between the two sub-samples. However, they did start to see differences within individual conferences.

Debi Ashenden: Found in translation

Debi Ashenden

Debi Ashenden attributes her career move into cyber security to a misunderstanding at a job interview.

At the time, she was looking for somewhere to finish the dissertation for her master’s degree in computer science while simultaneously getting some practical credibility, which she felt was lacking from a CV that included a first degree in English literature and a Master’s in Victorian literature. So, at the job interview: “I said I was interested in how people get access to information,” Ashenden explains. She had been working as a community development officer, in which capacity she helped young people find information about travel, education, and jobs. So what she meant was opening up access. The interviewer, who was considering her for a placement at what was then known as the Defence Evaluation Research Agency – now QinetiQ – interpreted it as the opposite: how to prevent people from getting access to information. Seeing a kindred spirit, he replied, “So are we.” Oops.

Still, it not only worked out, but led to a lengthy career. At the time, DERA had just learned it was going to be privatised. As a result, the organisation wanted someone who could bridge the gap between “deep techies” and the private sector and consultants – the outside world – the newly spun-off organisation would have to work with. “They wanted a translator,” she says, “and with my background I seemed to fit that profile.” This has been the thread tying together her whole career: taking knowledge and understanding from one community and introducing it to another one.

“It’s particularly satisfying to bridge communities,” she says.

Such near-accidental beginnings are common among those who have 20-plus-year careers in cyber security for the obvious reason that the field is so new as a recognised discipline. At DERA, Ashenden worked alongside researchers in electronic warfare; in-house arguments were ongoing about whether cyber security was a subfield within electronic warfare – or vice versa.

Commercialisation cost DERA its university-like qualities, and while Ashenden enjoyed moving into consultancy and working with banks and insurance houses, she wanted to do more research. She accepted a job offer from Cranfield University at the UK Defence Academy and, because of her prior experience in a defence establishment, found it a comfortable fit. There, she did some teaching – “Military students are so enthusiastic and questioning” – and met RISCS director Angela Sasse for the first time. Sasse had funding from what was then the Department of Trade and Industry to write a report on Human Vulnerabilities in Security Systems. That project gathered together a group of researchers and introduced Ashenden to RISCS deputy director Lizzie Coles-Kemp (Royal Holloway). “We realised we had a lot in common in the way we got into academia and the way that we think about cyber security.” Shortly afterwards, EPSRC funded a “sandpit”, and both Coles-Kemp and Ashenden were accepted. That work led to the VOME project.

“That was a fascinating experience,” Ashenden says, “because on paper neither of us had the track record to win something that size. We didn’t know what we shouldn’t do, so we kept doing things until someone said we couldn’t.”

The project gave Ashenden the opportunity to find her feet. VOME involved working with young people to make participatory videos; for example, a youth group in Newham made a music video about online identity, how they saw it, and how they valued it. The project also made a trading cards game around privacy. “It grounded where I saw my research fitting, and enabled me to know where I didn’t fit,” she says. Although she enjoyed community-based research, she was equally keen on trying to help security practitioners do their work better.

And then HMRC lost two CDs containing data on all the households receiving child benefit in the UK. This 2007 incident prompted a sea change. It was many years before security practitioners stopped using it as a poster child for the lack of security awareness. Clearly, the practice of security had to change, but few believed it was really possible. Because Ashenden had a good network from the work she’d already done, she and Sasse teamed up on the paper CISOs and Organisational Culture: Their Own Worst Enemy (PDF). This research found that CISOs didn’t believe they had the skills to effect the necessary cultural change. As a result, rather than going out and engaging with employees, many security practitioners preferred to buy security awareness training packages.

This led to work on another paper, Can We Sell Security Like Soap? A New Approach to Behaviour Change, with Darren Lawrence (Cranfield University). This work sought to discover whether techniques known as “social marketing”, which are used a lot in health care campaigns, could be used in cyber security. This technique, Ashenden says, “has a nice framework that’s easy to step through with people who are not social or behavioural scientists.” A key difference from health care, however, is that in cyber security it’s difficult to identify non-divisible actions, and when you do, the actions you’ve identified may not be significant enough to warrant a behaviour change programme.

Currently, through the ESRC CREST project, Ashenden and Lawrence have been working to improve the relationship between security practitioners and the rest of their organisation. This effort has led to the Security Dialogues (PDF). Ashenden wanted to include both security practitioners and software developers in the workshops she created for this work, but “I couldn’t get the developers to attend. They didn’t see the point.” The researchers sought to create a “safe space” where the participants would be comfortable; it helped that Ashenden had worked with security practitioners for a long time and knew about the problems they faced. Ashenden continues to work closely with Lawrence.

More recently, software developers have begun to express interest: as organisations move to cloud infrastructures they also push towards continuous integration and continuous delivery. Speeding up cycle times means that security has to be included throughout. Among the newly interesting topics are issues like what secure code is, what secure development looks like, the risk perceptions of software developers, and the practice of code review. Ashenden is now exploring these topics through the cSALSA project.

In 2016, Ashenden moved to the University of Portsmouth, her first time working at a mainstream university, since Cranfield is defence-based and all post-graduate. Going forward, Ashenden says, “I continue to be interested in the notion of how to support security practitioners and developers. I’m also keen on the idea of building better dialogues and engagements, and, increasingly, on finding ways to foster security dialogues with those working in AI and machine learning.”

Adam Joinson: Human behaviour and the internet

Adam Joinson

One of the most important aspects of RISCS is its multidisciplinary nature. Adam Joinson studies human behaviour online. At the June 2017 RISCS meeting, he introduced the cSALSA project to study how the way people talk about security changes over their lifespan.

Joinson began researching the psychology of how people use the internet as early as the mid-1990s. His academic career has grown up alongside the web; his earliest days online, when he was starting his PhD on self-esteem at the University of Hertfordshire, were in the era of Gopher and FTP, early protocols for sharing information. His first study of human web behaviour began in 1995 when he wrote to the webmasters of some football teams and got them to examine their log files to compare patterns of access to the teams’ wins and losses.

Originally, Joinson had it in mind to be a journalist. He picked psychology as his one optional subject. Besides the fact that he enjoyed it, he could see the potential in crossing psychology and the material he was studying separately in economics. Behavioural economics was just beginning as a field, and Daniel Kahneman and Amos Tversky were publishing their earliest work. During the final studies for his PhD thesis, he shared a lab with students in cognitive science and computer science, who introduced him to the early internet.

For a 1999 study, he set up a web-based survey with measures of socially desirable behaviour and studied the responses. The resulting paper Social Desirability, Anonymity and Internet-Based Questionnaires (PDF) compared anonymous and non-anonymous responses and found that anonymity did make some difference in that respondents who answered anonymously were more likely to report socially undesirable behaviour. However, the simple fact of the survey’s being conducted online had a unique effect on top of that. This early study showed that there is something significantly different about responding online, compared to offline surveys. The study has since been replicated many times, and even today the outcome is not much different.

The paper established two important principles. First, even though there are demographic issues about who gets online and who responds to online surveys – which still need to be taken into account when researchers use Amazon’s Mechanical Turk and do online polling – the paper demonstrated that it was possible to collect data in this way. Second, although most people at the time thought people didn’t present their real selves online, this paper showed the opposite: people actually became more candid. One result of this paper was the creation of online polling groups such as YouGov. “People came to ask how they could design their online system to get good, valid responses that would convince the customers,” he says.

This work led to a series of studies that compared the way people talk face-to-face with how they communicate online. In one study, Joinson manipulated self-awareness: some participants could see video of each other, some were watched by someone else, some were distracted by a cartoon playing in another window while chatting. The studies found that these various elements have a psychological impact that changes how people communicate. The resulting 2001 paper, Self-Disclosure in Computer-Mediated Communication: The Role of Self-Awareness and Visual Anonymity, found an association between high levels of spontaneous self-disclosure and the combination of heightened private self-awareness and reduced public self-awareness.

In 2008, Joinson’s interest began to shift from online anonymity to privacy as part of some of the first work studying how and why people were using Facebook. The resulting paper, “Looking at”, “Looking up” or “Keeping up with” People? Motives and Uses of Facebook (PDF), used a uses and gratifications framework to understand why people were using the social network and how those patterns of use related to what they were trying to get out of it. Joinson also found that people using it in order to meet new people had a different approach to privacy settings than those using it to reconnect with old friends. This paper kicked off a lot of work still being done about how patterns of use relate to privacy and security. A journalist using Twitter, for example, has specific motivations that determine how they relate to people and how open they are, as opposed to someone using it to keep up with friends and family. The lesson, Joinson says, is that design decisions in this area have to match privacy and security settings to users’ goals.

A final bit of social media work in 2016, led by Joinson’s PhD student Ben Marder, focused on the chilling effect of having mixed audiences. This work was based on Michel Foucault’s idea, derived from the Panopticon, that people who know they might be surveilled wind up censoring their own behaviour. The results they found, published as The extended ‘chilling’ effect of Facebook: The cold reality of ubiquitous social networking, had a twist: people, especially younger ones, manage the social anxiety of wanting their presentation to match observers’ expectations by changing their offline lives. They are conscious that pictures will be taken, posted, and tagged – and therefore have no-camera or no-tagging rules at parties. The Internet of Things is likely to threaten this approach by taking away the opportunity to make our own rules.

Simultaneously, Joinson was developing his ideas about trust. The 2010 paper Privacy, Trust, and Self-Disclosure Online (PDF) examined the interplay between concerns about privacy, the apparent trustworthiness of sites, and the adoption of privacy-enhancing behaviour. The result was, Joinson says, “a really odd relationship with privacy and trust”: people think that sites that look trustworthy offer better privacy protection. Facebook poses a particular challenge, in that users are managing dual trust relationships – one with the people they’re connected to and the other with the service itself. Yet the service is often transparent to its users, who focus instead on deciding what to share with their connections based on how they think those connections will treat the disclosures.

Security awareness opportunities: retailers

The idea for this small grant project came when Jennifer Sheils, the head of partner networks, serious and organised crime for the Home Office’s Cyber Aware Campaign, was told of an experience buying a laptop. “Do I need more software?”, the customer asked. “It’s an Apple, so it should be fine,” they were told. The story led Sheils to wonder: who gives advice to whom? Who should be responsible? RISCS director Angela Sasse (UCL), Simon Parkin (UCL), and Lynne Coventry (Northumbria) set out to study how security advice is delivered in a retail context with a view to creating a model that will establish best practice that can be rolled out to other retailers and sectors. Brands will want to see commercial benefits, but may also see increasing customer trust by delivering good security advice as an incentive.

Angela Sasse

Cyber Aware is the government’s first and only cyber security public awareness communications initiative. As such, it is intended to deliver official and expert advice to help the public and micro businesses focus on actionable cyber secure behaviours and make good cyber security habits second nature. Its priority advice includes: use a strong and separate password for email; install software and app updates; use a screen lock and don’t send sensitive data over public wifi; keep backups; and use two-factor authentication where possible. This advice is updated and prioritised as needed based on incoming threats. Cyber Aware has more than 300 cross-sector partners. Tracking results for 2016 showed that 11 million people and 4 million businesses were more likely to adopt these behaviours as a result of seeing the campaign, but the campaign needs to scale up and needs the help of industry to do it.

Working with the retail sector seems particularly promising. Research says that trusted brands may have an important role to play, as people expect to receive good advice from the organisations they share data with. Research also says that people are most receptive at the “point of incidence” – a point when individuals are doing something relevant such as buying a device or entering a password. The goal of this project, therefore, was to:

  • Establish evidence regarding the power and influence of trusted brands and their sales staff in delivering cyber security advice in the physical retail environment;
  • Identify the most effective channels and interventions in physical retail environments;
  • Provide retailers with evidence of the commercial benefits.

Lynne Coventry

Lynne Coventry, director of the Psychology and Communication Technology Lab, explained the psychological background using BJ Fogg’s behavioural model, which captures the interaction of motivation and ability to change. When a person’s motivation is high or the behaviour is easy, that behaviour is easier to activate. However, triggers – cues to action – have a key role to play, as motivation and ease are not always sufficient by themselves to inspire change. Habits also play an important role, in that as much as 40% of our activities are repeated almost daily, usually in the same location. Although habits are slow to develop, once they have been established they become automatic, and therefore easy to perform and extremely hard to break, even when the individual is motivated to do so. It’s easiest to change habits at times of transition. The hypothesis for this project, therefore, is that buying a new computer is a transitional moment that has the potential to disrupt established bad security habits, though we must also be careful not to disrupt good security habits. This piece of research, which worked with both customers and the retailer, sought to establish the current issues and state of knowledge. For future work, Coventry would like to look at the effectiveness of different approaches to trying to change behaviour. Involving the intended audience in designing these approaches is key, she says, as this involvement has been shown to increase the likelihood that the approach will be effective.
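
As a toy illustration of Fogg’s model (an assumption-laden sketch, not something the project built), behaviour can be treated as activating only when a trigger arrives while the product of motivation and ability clears a threshold; the scales and threshold below are invented:

```python
def behaviour_activated(motivation: float, ability: float, trigger: bool,
                        threshold: float = 0.5) -> bool:
    """Toy rendering of Fogg's model: a behaviour fires only when a trigger
    arrives while motivation x ability clears a threshold (0-1 scales,
    threshold invented for illustration)."""
    if not trigger:
        return False  # cues to action are necessary, not just motivation and ease
    return motivation * ability >= threshold

# A prompt at a transition point (buying a new laptop) to set up backups:
print(behaviour_activated(motivation=0.6, ability=0.9, trigger=True))  # True
# The same prompt when the task feels hard (low ability) fails to activate:
print(behaviour_activated(motivation=0.6, ability=0.3, trigger=True))  # False
```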

UCL researcher Simon Parkin

Simon Parkin described the fieldwork, which included 85 customer interviews across four branches of a major UK retailer and 21 interviews with sales staff. The approach was exploratory and qualitative, aimed at understanding the underlying knowledge, ability, barriers, and motivators. Operating independently of, but with support from, the retailer, the team engaged people at the point of sale, let them express their opinions in their own language, and sought to identify opportunities for improvements. They attended morning staff briefings to introduce themselves and the research, and to establish how to work while minimising disruption.

The customers, who were offered a £20 voucher in return for their time, were purchasing a new computer or mobile device, and in 15-minute interviews the researchers sought to understand:

  • Their level of awareness of cyber risks;
  • Where they acquire their knowledge;
  • What guidance and advice they expect from the sales staff;
  • How different forms of advice would be perceived.

The sales staff were interviewed as they were available, and asked about:

  • The queries they get from customers about cyber security;
  • How well equipped they feel they are to answer;
  • How they might be able to receive up-to-date knowledge and relay it to customers.

The researchers found that customers were often replacing a device that could be anywhere from six to 12 years old; many of these devices were used for many activities, both work and personal, and use was shared with other household members. One customer wanted an up-to-date computer for visiting grandchildren to use. Customers typically based their decisions on features such as screen size, portability, performance, and brand, and researched their purchases either online or by browsing in-store and talking to staff; a few took the advice of a “techie friend” or an IT service provider, IT staff at work, or a bank or Internet Service Provider (ISP). Anti-virus use varied considerably.

There were a number of opportunities for interventions. The stores sold security products such as anti-virus and external hard drives, and these formed part of the sale conversation. Staff also felt it was important for customers to have some security and keep it up to date, but didn’t want to bog them down with details or scare them out of the purchase. Customers also varied in how amenable they were to advice, and displayed varying levels of ability and motivation.

A representative of the retailer said she saw the research as a great opportunity, both to improve the advice given to staff and to solidify the retailer’s trusted relationships with its customers.

In conclusion, Sasse said the research suggested that the point of sale for a new computer is an opportune moment to ensure computers have appropriate security in place. However, these efforts must not be perceived as an attempt to up-sell; staff knowledge has to be kept up to date via a reliable source; and it will be necessary to ensure that the security chain isn’t broken a year later. The group has obtained follow-on funding to explore further how to fit advice into the sales process.

In answer to questions, the group indicated that they recognise that the model will have to be adapted for different retailers and demographics, though the goal is consistent messaging.

Shujun Li: The ACCEPT project

Shujun Li

The root of the Addressing Cybersecurity and Cybercrime via a co-Evolutionary aPproach to reducing human-relaTed risks (ACCEPT) project, says its leader, Shujun Li (Surrey and Kent), is that while research has found personalisation and contextualisation to be crucial elements in many digital systems involving human users, not enough has been done to include them in cyber security applications, especially those promoting awareness.

Take, for example, password advice. People are constantly told what kind of passwords they should or should not choose, but these instructions rarely take context into account and are often too abstract to be actionable. In many cases, complicated password policies even make the advice irrelevant. Yes, you want a strong, unique password for your bank account, but the same does not apply for a news site that requires you to create an account just to read a few articles. More seriously, the victims of work-at-home scams have a problem that can’t be solved just by changing their password to one that’s more robust. Instead, their problem is that they’ve been lured into committing illegal acts without their understanding, and they may wind up in prison as co-conspirators. A yet different context applies to operational staff in the control room of a nuclear power plant, where a password is often used in tandem with other authentication mechanisms such as biometrics and hardware tokens. In the last example, because the consequences of any attacks may be disastrous, the security of the whole system cannot be guaranteed with just a password, no matter how carefully staff follow guidelines in creating it.

“To us, the solution is as simple as, bringing humans back into the whole picture,” says Li. “We need to have a human-centric approach, and do it constantly and consistently. Watching what they do and providing timely feedback can create a virtuous circle that both encourages them to behave in desirable ways and helps the rest of us understand better what the criminals are doing with the people they target. Better reporting about the problems they encounter also helps us gather data to provide personalised feedback through profiling, data mining, and machine learning.” The goal, Li says, is to be able to find solutions that can be adapted to different groups of people (personalisation) and different types of problems (contextualisation) in cyber security and cybercrime. In social science, delivering this kind of targeted message is often called a “segmented approach”; law enforcement – notably Neighbourhood Watch and the Home Office – has adopted it for fighting both physical-world and cyber crime.

It seems obvious to say that one security message cannot possibly fit all – children and old people, men and women, of all levels of education, and with all attitudes toward privacy. “We need different ways of encouraging and engaging. Even small differences in wording may matter a lot.”

The ACCEPT project intends to support this personalisation and contextualisation by combining knowledge drawn from social sciences (criminology, psychology, business), engineering, and physical sciences (computer science, security engineering) to create a theoretical socio-technical framework and a set of software tools. The framework and tools are intended to help organisations both to personalise and contextualise communications and provide feedback to users in a more human-centric manner. They will draw on both our understanding of human behaviour – how criminals target victims and why victims fall prey to scams – and emerging ICT technologies such as machine learning and mobile computing. Creating this framework, Li hopes, will make it possible to improve upon current cyber security awareness campaigns to better engage people and influence them more effectively in a positive direction. In what remains of 2017, ACCEPT will conduct a number of workshops, interviews, and other user studies to gain feedback on the nascent framework and its application to real-world scenarios from police forces and other stakeholders, as well as the general public.
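
A minimal sketch of what such segmented, context-dependent messaging could look like in code; the segments, contexts, and messages below are invented placeholders, not ACCEPT’s actual framework or tools:

```python
# Hypothetical message table keyed by (audience segment, context). The real
# ACCEPT framework would derive segments and messages from profiling, data
# mining, and user studies; these entries are placeholders.
MESSAGES = {
    ("older_adult", "online_banking"):
        "Use a strong, unique password here and turn on two-factor authentication.",
    ("older_adult", "news_site"):
        "A simple password is fine for this account; just never reuse your banking password.",
    ("job_seeker", "work_at_home_offer"):
        "Jobs that pay you to move money or reship parcels are very likely scams.",
}

def advise(segment: str, context: str) -> str:
    """Return advice tailored to who the user is (personalisation) and what
    they are doing (contextualisation), with a generic fallback."""
    return MESSAGES.get((segment, context),
                        "Pause and verify the request through a trusted channel before acting.")

print(advise("job_seeker", "work_at_home_offer"))
print(advise("older_adult", "smart_tv_setup"))  # no targeted message, so the fallback is used
```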

This work was partly inspired by a white paper written as part of the first phase of RISCS in collaboration with Hewlett Packard Enterprise, Awareness Is Only the First Step: A Framework for Progressive Engagement of Staff in Cyber Security (PDF). This paper, which studied methods for raising awareness of cyber security among an organisation’s staff, made the point that awareness training can’t just be done once. To be effective, it has to be a continuous campaign so that security-aware behaviour becomes a habit. It’s this stage – habit – that Li would like the broader public to attain: “Our approach is to make it to the level of just being part of life.”

One of the tools the project hopes to create is a digital platform that would allow individuals to share data about their behaviour with trusted organisations they select. In return, the organisations would provide helpful information regarding cyber security and cybercrime, creating a feedback loop that would both habituate good behaviour, as above, while giving communities and organisations a better understanding of what is happening in the real world.

“We understand it’s very ambitious,” says Li, “but we want to create preliminary evidence facilitated by a number of technologies that actually help people to be more willing to share information and make them feel safer.”

The project intends to investigate two use cases to make the research more focused (contextualised). One is “traditional” cybercrimes such as work-at-home scams, in which people are promised commission fees for transferring money (“money mules”) or reshipping online purchases (“reshipping mules”) abroad. The people who are recruited for these “jobs” may not really understand what the job is or what the potential consequences are for either themselves or the people whose money is being stolen. The 2015 paper Drops for Stuff: An Analysis of Reshipping Mule Scams (PDF) suggested that some criminals who recruited reshipping mules seemed to cease communicating around the time the first monthly payments are due. If this hypothesis, which was purely data-driven, can be verified through software tools that engage directly with reshipping mules, it can be used to warn future potential victims so that they will not fall into the trap.
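
For illustration only, the sort of pattern that paper reports could be checked against message logs with a simple test of whether an operator’s last message precedes the first payday; the function, dates, and 30-day pay cycle below are invented, not the paper’s method:

```python
from datetime import date, timedelta

def operator_went_silent(last_message: date, recruitment: date,
                         pay_cycle_days: int = 30) -> bool:
    """Flag a mule 'job' whose operator stopped communicating by the time the
    first payment fell due -- the pattern reported in Drops for Stuff. The
    30-day cycle and the dates below are invented for illustration."""
    first_payday = recruitment + timedelta(days=pay_cycle_days)
    return last_message <= first_payday

print(operator_went_silent(last_message=date(2017, 3, 28), recruitment=date(2017, 3, 1)))  # True
print(operator_went_silent(last_message=date(2017, 5, 10), recruitment=date(2017, 3, 1)))  # False
```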

“Many of those people are not clear about those consequences and can’t evaluate them,” Li says, noting that one of that paper’s authors, Gianluca Stringhini, is a member of the ACCEPT project team. Like this paper, much other similar research work has been data-driven, but Li hopes that by adding human behaviour and user opinions collected via software tools the project will be able to “fill the gap of what is happening in the real world”, understand why criminals are successful, and how we can help potential victims and law enforcement to outwit criminals.

“If we can engage money mules, whether in the process of looking for jobs or already recruited to work at home, we can actually push more meaningful messages to them,” Li says. “For instance, if we know they are likely already working as money mules we can show them the consequences to victims of what they do, which could lead to an empathy effect so that some of them stop cooperating with the criminals or even decide to cooperate with law enforcement to track down the criminals. We can also show them the legal risk attached to being accomplices in order to persuade them to stop if they know they are working for criminals. Legal messages like these must be context-dependent, since the laws regarding money mules differ from one jurisdiction to another.”

The project wants the second use case to reflect the most likely future as the Internet of Things is being more and more widely deployed: hybrid cyber-physical crimes and the much more difficult problems they will cause. The group is particularly interested in transport, an area in which the project’s partner TRL, a transport research company, has a lot of expertise.

“We know much less for this part,” Li says. There is already a good deal of coverage of car hacking and ongoing work studying the potential for cyber attacks on infrastructure such as the railways and energy systems. For ACCEPT, infrastructure is a very different challenge to think about. First, employees become the main source of human-related risks. As a user group, employees are vastly different from citizens; there are greater opportunities for monitoring within an organisation, since many employment contracts already allow it for security and safety reasons. Second, many forms of cyber-physical crime are so new that even law enforcement lacks sufficient information, and the technologies involved are evolving quickly. The project will treat this use case as more speculative, focusing on what will happen in the future and what we can do now.

Madeline Carr: It’s about power

Madeline Carr

At first glance, it’s hard to relate the work of Madeline Carr (UCL) to cyber security. She talks about policy-making and power relations, not passwords and phishing attacks. But bear with her. She is bringing into RISCS a discipline that hasn’t been represented before: international relations. The cyber attacks of the last year, from international hacker groups spreading fake news, to the first Internet of Things botnet attacks, to the 2016 hack of the Ukrainian power grid, all show the need for this approach.

Those who remember the earliest stirrings of the internet remember the rhetoric that accompanied its arrival. The internet was going to democratise the world, flattening hierarchies and changing power structures. Twenty-five or so years later, things have not gone the way the pioneers predicted. Instead, we have companies whose revenues are larger than those of some countries and persistent debates over governance.

By the time Carr, who was born in Australia but moved to Canada at the age of three, was hearing those claims about the internet, they were already familiar from multiple earlier contexts. The first was in the mid 1980s, when she was working in desktop publishing and everyone was talking about the democratisation of design technology and imagining a future in which people could print their own newspapers. Over time, however, “The big lesson was that everyone began to understand the skills that typographers, graphical designers, and professional writers brought that had been completely misunderstood.” The technology did, of course, offer democratisation, but that didn’t mean you could just hand the latest DTP program to a secretary untrained in any of these areas and put them in charge of print production.

The next time it was film. Because of digital transformation, “Now anyone could make a movie.” But again: taking away the financial constraints of filmmaking didn’t remove the need for ideas and skills.

Then it was web design. Carr was always interested in what was and was not changing. In the latter category were perennial skills like writing and visual design, skills that went back a long way and were always needed.

All of this was behind her when she started an undergraduate degree in English literature in 1999, when she, her husband, and her first child had all moved to Tasmania and she was running the local branch of the national film, TV, and radio school. She loved to read, and thought the degree would help her in reading and assessing film scripts. “I hated English literature at uni,” she says, “but one day in the second year I stumbled into the wrong lecture and there was a charismatic guy talking about Dylan Thomas’s poetry and how it contributed to his sense of nationhood. And I thought, ‘What is this?'” It was, in fact, a political science class, “which I had always thought was about voter patterns and other really dry, dull issues.” She found the subject so interesting she changed immediately to a double major in political science and then spent a year doing a master’s. With young children still at home, she then thought it made sense to go on to do a PhD, beginning in 2006, when she won a scholarship to the highly-regarded international relations department at Australian National University.

It was in the course of doing that degree that she adopted her present research focus. “I was working on American power in southeast Asia,” she says, “but I couldn’t find a single new thing to say about it, and the thing that kept coming back to me was, how come in all this stuff I’m reading about American power, global politics, and so on, there’s no mention of the biggest transformation that is taking place in the world today – what’s happening in digital technologies?” She wrote her PhD on American power and the internet in international relations. And there again were the same tropes: big assertions about what the internet would mean for the power of the US and other countries, but “without any teasing apart of that technology”. And this was even though international relations tends to view technology and power as closely related, on the basis that the state that dominates in technology – weapons and production – is the one most likely to prevail in a war.

Carr’s work in this area began with examining what US policy makers actually said they wanted the internet to do for the country. In the US, the Clinton-Gore administration’s vision was to use the information superhighway as a way of both promoting human rights and expanding US markets. “They saw that it could shore up American power in an appropriate way for the post Cold War world.” The policies they came up with, she says, “formed an incredibly successful approach that America still benefits from even now.”

But the US approach, while largely supported in the West, has been less accepted elsewhere. Internet freedom has been a particular sticking point. “Yes, there’s an important human rights element in this US policy,” Carr says, “but unless you acknowledge that it’s also a very powerful foreign policy tool, then you’re missing why some other states will never fully acquiesce.”

As the internet has expanded around the globe, the easy assumption that it would promote human rights and shore up post-Cold War American power isn’t shared by other countries, which have their own agendas and ambitions. The (largely) US-based internet pioneers had a specific set of democratic values that they tried to embed in the technology they exported. But new developments like the Internet of Things are being defined and built in other countries with different values and goals. Even in the mid-2000s, Carr says, it was noticeable that China was developing a clear strategy for what it wanted to happen over the coming 20 to 30 years. The combination of its manufacturing base and advanced research into new technologies means that its economic expectations are likely to contribute to some power transitions in global politics of the coming decades. Essentially, the shared values of early-adopter countries like the US, Britain, and Australia, “are likely to be of decreasing consequence when you look at the shifting demographics online. The sheer numbers still to come from the developing world and Southeast Asia will vastly outnumber them.”

Carr moved to Britain in 2012, taking up a newly-created position at the University of Aberystwyth for someone working in international politics in the cyber dimension. There, she developed a master’s degree programme and worked in the UK policy community. Two years ago, she moved to Cardiff to do similar work. In 2017 she moved to UCL to take up a position in the multi-disciplinary Department of Science, Technology, Engineering and Public Policy, where she will develop a new digital research agenda and an MPA in Digital Technologies and Policy.

In the 2015 paper Power Plays in Global Internet Governance, Carr considered all this in the light of discussions over the future of the Internet Corporation for Assigned Names and Numbers as the US Department of Commerce relinquished the last vestiges of its control. The rhetoric surrounding the importance of ensuring an ongoing multistakeholder model of governance – along with suggestions that the internet’s version of it might inspire those concerned with other “post-state” issues to adopt it as a model – led Carr to tackle the subject. “I felt that it was so untouchable – in the sense that it was widely accepted as a ‘good thing’ – that I wanted to say, hang on: multistakeholderism itself isn’t an end goal. It may be a mechanism for doing something, but look at the power dynamics behind it.” Whereas, she says, “If you unpack multistakeholderism it’s easy to see why it’s perceived as an American or Western mechanism for the protection of power.” To date, private-sector representation has largely been limited to huge US companies, and even the NGOs involved as “civil society” are often largely US-funded even when they’re based in other countries. “Multistakeholderism is a positive approach, but ignoring the flaws in contemporary practices only weakens it. We know that if global arrangements privilege one group too much, people won’t adhere to them.”

Carr believes it’s essential therefore, to look ahead: even if something is working now and for years into the future, smart policy-making requires understanding why it may not be sustainable in the long run and finding better options that will produce the desired outcomes.

Here is where Carr lands on cybersecurity. Her PhD work had three prongs: internet governance, network neutrality, and cyber security. She saw them all as deeply entwined, although as her research agenda has developed, cyber security has drawn her attention because of her interest in global security. In “Cyberspace and International Order”, a chapter for the book she co-edited, The Anarchical Society at 40, Carr looks at the implications of the problem of attribution in cyberspace. Most views about this tend to be polarised. Some people insist that it’s not a problem and that war and political conflict are still very much the same as ever. Others think that anonymity in international conflict places the world in great danger because in the past it’s been generally clear in international relations who was acting against a given country. “IR scholars need to unpick this,” Carr says. “What difference does it make to the maintenance of international order that states can act under the cloak of plausible deniability?” The kind of strategic studies that have focused most on this are “one dimension, but not the whole story of global security”.

Another of her examples is public-private partnerships, which are also rarely critiqued. In 2016’s Public-Private Partnerships in National Cyber-Security Strategies, Carr highlights the problems of responsibility in these “partnerships”, pointing out that both the US and UK have built heavily on the idea of PPPs as the cornerstone of the national cyber security strategy. “It’s not possible for governments to relegate responsibility for national security to the private sector,” she says, “and the private sector certainly doesn’t want that responsibility, not the least because they would be concerned about the liability. So what exactly is the nature of this partnership?” She points out that PPPs require shared goals, mutual benefits, and clarity about roles, responsibilities, and the hierarchy of relationships, none of which characterize the PPP in national cyber security strategies. “To their credit, the most recent UK National Cyber Security Strategy really acknowledges that and takes a much more realistic and pro-active approach.”

The driving principle behind Carr’s work, therefore, is to take a multidimensional approach to questioning the things that are too widely accepted and not critically examined or challenged. She does this across disciplines – technology, law, social science. “Essentially, I’m interested in the global politics of technology and the way the world will change in terms of security and international order as technology continues to develop,” she says. “I think we need to do a lot more work to understand how the world is changing. And of course, those factors that remain constant throughout significant technological shifts. You still can’t make a good film without creativity.”

EMPHASIS: Studying ransomware

Eerke Boiten

The best-known example of ransomware to date is 2017’s WannaCry, which disrupted numerous organisations including at least a third of NHS trusts. Even before that incident, Eerke Boiten (De Montfort) was starting work on EconoMical, PsycHologicAl and Societal Impact of RanSomware (EMPHASIS), a project to study ransomware and devise interventions. EMPHASIS includes researchers from the fields of computer science, criminology, psychology, economics, and law across five universities (Kent, Leeds, De Montfort, Newcastle, and City), and has several partners from industry, law enforcement, and universities abroad.

As WannaCry showed, ransomware can extend to critical national infrastructure. Yet to date most ransomware has been relatively crude. There is, says Eerke Boiten, the project’s leader, the potential for far more technological sophistication: WannaCry was relatively simple, yet still caused havoc. In addition, victims play an essential role in the stories of these attacks. WannaCry relied on victims who had failed to update Windows XP, yet was asking them to pay the ransom in bitcoin – an apparent contradiction that becomes less counterintuitive once you consider the required interaction between criminal and victim after infection. “It’s an interesting technical problem, because cybercrime is a big thing but organised cybercrime is an even bigger threat,” Boiten says, adding that when you look for the sort of cybercrime that might result in large gains for organised gangs, ransomware is a good candidate.

The project asks the following research questions:

  • Why is ransomware so effective, and why are there so many victims?
  • Who is carrying out ransomware attacks?
  • How can police agencies be helped?
  • What interventions are required to mitigate the impact?

The overall goal is to strengthen society’s resistance to ransomware to make it less effective, protect and prepare potential victims, whether organisations or citizens, and pursue the criminals.

Since this is an investigation of a known, existing problem rather than a quest for a use case for a proposed solution, gathering data from law enforcement, SMEs, technical support services, and CERTs, as well as from public surveys and interviews with stakeholders, is crucial. Besides these sources, the group also proposes to use script analysis, behavioural analysis, and profiling to understand narratives for both criminals and victims and to build typical ransomware scenarios that can be used to model these attacks at a larger scale and understand them from an economic point of view.

From the economic point of view, the researchers want to understand how ransomware works as a business in order to find the weak points where adaptive interventions can be made. From the technological point of view, they want to pinpoint the strengths and weaknesses that can be disrupted and anticipate how the malware might evolve in future. Finally, from the psychological and criminological side, the project will study who the victims and criminals are, and what that means for the future.

The world has already changed a bit since the project began, as WannaCry was followed by NotPetya; the goals of such attacks are widening from pure financial gain to include disruption. In a recent attack, ransomware may have been merely a decoy to deflect attention from a more malicious attack trying to siphon off money from a bank. At the moment, we don’t really know who the perpetrators are, since these attacks are easy enough to assemble that they could come from actors as technically limited as “script kiddies” or as sophisticated as a nation-state. The researchers expect ransom amounts to rise to just above the amount people are willing to pay as these attacks begin to incorporate price discrimination, perhaps based on personal information obtained by the malware from the computer it has infected.

“A recurring theme in our project is the possibly false confidence that criminals don’t read academic papers,” Boiten says. This notion is based on the fact that some potential schemes that were reported 20 years ago still haven’t been seen in the wild.

In all this, data is crucial. Unfortunately, the actors who have it are the organised criminals themselves, who clearly profit from analysing it.

In the discussion that followed Boiten’s presentation of this project at the October 2017 RISCS meeting, it became clear that some issues are emerging that weren’t in the original research design. These include, for example, conflicts between varying legal requirements for systems such as competition and safety, or personally identifiable information versus the need to generate value. Boiten noted that in the WannaCry case there were at least six actors you could call responsible: the NHS, which didn’t update its systems; the government, which didn’t finance those updates; the Shadow Brokers; the NSA; Microsoft, for discontinuing free provision of security updates for XP; and then the criminals.

Angela Sasse noted the added burden placed on those under attack by the flood of bad advice that tends to follow an incident, citing WannaCry as an example. The most important advice – making sure you have clean backups – appeared only on the NCSC’s site. Improving the dissemination of good advice is also a goal of EMPHASIS.

Other commenters noted that insurers who pay ransoms do have some information, though they may not deem it in their interest to share it. One who had seen the transcript from a ransomware group’s technical support line noticed the length of time spent haggling and advising how to get hold of bitcoin. It might be feasible to set up a group of deliberately infected machines and gather data by communicating with the criminals. Boiten noted there are ethical issues with doing that, though it seemed a promising possibility. Finally, businesses try to assess their threats, opportunities, strengths, and weaknesses. One could reverse this: what threats become opportunities? Doing that would ensure that the results, when published, would be relevant to businesses and presented in the language they use and understand.

Communicating with the board: setting the scene

This is the first session of a RISCS/NCSC workshop, held in June 2017, that explored supporting boards in making decisions about cyber security. It led directly to the research call that closes December 1, 2017. The day, which is summarised in full here, was broken up into three sessions. The other two are The problem with vendors and How do boards assess and manage cyber risk?.

Rachel C set the scene by discussing misunderstandings and failures on both sides to understand each other’s priorities and requirements. Both sides have to collaborate to work out the critical elements of cyber risk to make the best decisions. Practitioners complain that boards don’t ask the right questions or don’t fund the right things; boards complain that they are given an avalanche of technical detail they don’t understand that provides insufficient information about how those details relate to the level of risk. In some cases, the issue may be language; the RISCS cSALSA project is studying the different ways people talk about cyber security at different stages of their lifespan.

A key theme that emerged from the last Practitioners Panel was metrics, which can be an effective way of communicating. However, the fact that cyber risk can’t be reduced to a simple 1 to 5 measure leads to many questions about what can be measured and how, and what those measurements mean.

Rachel set four questions for the day:

  • What would help the board to better manage cyber risk?
  • What disciplines and approaches could help us solve this problem?
  • What’s wrong with current approaches?
  • What information does the board need, and what information do security practitioners want to report?

To further set the scene, three presentations tackled the question, “What’s wrong with metrics?”

Tim Roberts, managing director of the IBM subsidiary Promontory, offered the board’s point of view. A day earlier, Roberts had met with a company in the business of supplying sensitive data to clients. A severe data incident had led to the discovery that the company may have been delivering corrupted data for the last ten years, which means it will have a lot of compensation to pay, plus regulatory fines. The root cause will be a combination of failed processes, human error, negligence, neglect, and no one acting on things that didn’t look right. So one problem Roberts sees is too narrow a focus on a single definition, leading to his first question: should cyber metrics cover just external attacks or a wider set of risks to a business?

Boards, he says, want the latter: what are the risks to my business that are concerned with systems and data? What do I need to manage/steer the business?

He has observed some common problems:

  • Metrics are absent or scarcely visible. He cited the example of a bank’s presentation to its risk committee; after myriad pages on individual loans, there was one page on operational risk, on which was a box discussing cyber risk.
  • Presentations are not proportionate or clear, or offer only a partial picture.

However, he also sees good examples:

  • There is a lot of business literature that explains what good metrics look like. Central banks, for example, believe they have a clear sense of best practice. However, setting a “risk appetite” is not common outside of financial services.
  • Different dashboards and metrics can be used to provide separate information at board and practitioner levels. In a major UK bank, Roberts has seen a cascade of 300 operational metrics across 100 units that feed up into 12 master metrics the board uses to set its risk appetite. The board can look at a cross-section at any level in between if they want to focus on a particular area. Businesses need systems suitable to their scale.
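
A hedged sketch of the kind of cascade Roberts describes, with invented metrics and an invented mapping: unit-level operational scores roll up into a handful of master indicators a board might review.

```python
from statistics import mean

# Invented unit-level operational metrics (0 = good, 1 = bad) and an invented
# mapping of which operational metric feeds which board-level master metric.
operational = {
    "patching_backlog":         {"unit_a": 0.2, "unit_b": 0.7, "unit_c": 0.4},
    "phishing_click_rate":      {"unit_a": 0.1, "unit_b": 0.3, "unit_c": 0.2},
    "unreviewed_access_rights": {"unit_a": 0.5, "unit_b": 0.6, "unit_c": 0.3},
}
master_of = {
    "patching_backlog": "vulnerability_exposure",
    "phishing_click_rate": "people_risk",
    "unreviewed_access_rights": "people_risk",
}

def roll_up(operational, master_of):
    """Average each operational metric across units, then average those scores
    into the master metric they feed."""
    per_metric = {name: mean(units.values()) for name, units in operational.items()}
    masters = {}
    for name, score in per_metric.items():
        masters.setdefault(master_of[name], []).append(score)
    return {name: round(mean(scores), 2) for name, scores in masters.items()}

print(roll_up(operational, master_of))
# {'vulnerability_exposure': 0.43, 'people_risk': 0.33}
```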

Other observations:

  • Metrics have to be updated in response to changes in the business and its risk environment. If a company has engaged in a major technology transformation, the risks associated with that programme need to be identified and presented.
  • Risk data is helpful, but presenting information that highlights potential conclusions and/or actions is more so. That approach leaves scope for manipulation, but highlighting what’s changed, what isn’t in line with expectations, and what’s deteriorating is valuable. When the European bank mentioned above redesigned its information pack, it began with a single page of all risks, highlighting the ones that they deemed needed focus, but retaining the ability to roam freely through the supporting information.

Andy Jones, the CISO of Maersk Transport and Logistics, previously held the same position at Unilever and Sainsbury’s, and before that spent six years as a researcher; he set out a practitioner’s point of view. It appeals to him that “cyber” is derived from “cybernetics”, which was coined in 1948 by Norbert Wiener, who derived it from the Greek word for “steersman”, because Maersk is the world’s largest container ship operator. The hype and FUD (fear, uncertainty, and doubt) we live in have been necessary in order to get cyber security taken seriously, but security practitioners have no other cards to play, and boards get tired of hearing the same old scare stories while reading newspapers that say otherwise. Businesses want to embrace the digital market for obvious financial reasons but are mired in threats and hype.

A crucial question is: who’s your audience? Generally speaking, CISOs are unpopular because they are a source of bad news, and board members don’t want to ask questions that make them look stupid. Board members are nonetheless smart and – particularly non-executive members who sit on other boards – often ask very good questions.

They are generally:

  • Interested in return on investment;
  • Uncertain how real the threat is;
  • Unhappy to hear that it can’t just be “fixed”;
  • Uncertain how to judge cyber risks against other risks;
  • Uncomfortable with the topic;
  • Concerned about both organisational and personal reputational risk.

Practitioners face a number of problems in trying to communicate with them, such as different linguistic dialects and the varying levels of maturity across a company, although to Jones’s surprise cyber risk turns out to be one of the more advanced risk disciplines. A more complex problem is that so many metrics can be read any way you like: if a high number of viruses is detected, does that mean detection is effective or that patching is poor? If the number is low, is it rising or falling? The correlation between any of these and risk is hazy.

Boards do respond to the following, which unfortunately are all poor risk indicators:

  • Peer benchmarking;
  • Compliance measures;
  • Graphs and colours;
  • Legal and regulatory drivers.

Often, cyber risk translates badly into standard risk templates. For example, a cyber attack causing damage valued at £10 million is financially insignificant to Maersk, a £25 billion company whose bigger risks lie elsewhere, such as the risk that political upheaval will close a port. That said, many risks, such as the chance of a cyber incident, are certainties, not probabilities, and should be treated as such. However, it is often not clear where to report new risks as they emerge.

Communicating with any board has to take into account company culture, the language of the organisation, and the personal outlook and agenda of individual board members. Analogies and storytelling may help. Inspiration may come from other industries such as gambling, aviation, military intelligence (although Jones felt it’s been too dominant in security to date), polling, and advertising, plus new areas such as big data, fuzzy logic, quantum theory, and chaos theory. Above all, Jones concluded, challenge the assumptions. For example, it’s wrong to think that if you can’t measure something you can’t manage it.

Angela Sasse, director of RISCS, offered an academic’s overview of the frustrations of trying to collect data of sufficient quality to develop theories and models – and then more data to test them. In her career, Sasse has conducted hundreds of interviews with organisations and surveys with thousands of employees.

The two questions RISCS began with in its first phase require data if they are to be answered with scientific rigour:

– How secure is my organisation?
– How do we make better security decisions?

In Sasse’s experience, many organisations measure things that are easy to measure. The success of companies like PhishMe is based on the fact that it’s easy to compel people to take their training and easy to measure the results. But, Sasse asks, what does it really mean and what are the unwanted side effects? What if clicking on fewer links means losing customers?

Practitioners rarely understand that academics wanting to test a new metric need to take a baseline measurement so they can measure effectiveness. Instead, practitioners respond to new ideas by taking one measurement, presenting the resulting data at a conference, and saying it has been validated by academics. New metrics must be put into context, and often this isn’t done.
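
As a purely illustrative sketch of why the baseline matters, the snippet below compares a hypothetical phishing-simulation click rate before and after a training campaign and reports an effect size with a confidence interval. The figures are invented and do not come from any study discussed here.

    import math

    # Hypothetical phishing-simulation click counts, before and after training.
    baseline_clicks, baseline_n = 120, 400   # 30% clicked before the intervention
    followup_clicks, followup_n = 80, 400    # 20% clicked afterwards

    p1 = baseline_clicks / baseline_n
    p2 = followup_clicks / followup_n

    # Cohen's h: a standard effect size for a difference between two proportions.
    h = 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

    # Normal-approximation 95% confidence interval for the raw difference.
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / baseline_n + p2 * (1 - p2) / followup_n)
    ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

    print(f"Click rate fell from {p1:.0%} to {p2:.0%}")
    print(f"Cohen's h = {h:.2f} (rule of thumb: 0.2 small, 0.5 medium, 0.8 large)")
    print(f"95% CI for the difference: {ci_low:.1%} to {ci_high:.1%}")

Without the baseline row, the follow-up figure on its own says nothing about whether the intervention changed anything, which is exactly the gap Sasse describes.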

In addition, it’s essential to take enough measurements to see whether the costs and benefits are in proportion, and to measure the cost and benefit of the measurement itself. An organisation obsessed with password strength had logs it never looked at, and so never spotted three hacking attempts on 100,000 accounts. In secure organisations, access has to be audited – but organisations refuse to buy the tools to make auditing humanly possible. In one organisation, the researchers found eight years’ worth of logs that had never been looked at.

In one case, a high street bank, wanting to improve the security awareness of its corporate and SME customers, conducted roadshow events and webinars. But its only metric was the number of people who attended – nothing about whether attendees took the advice, changed anything, or felt better equipped. When Sasse’s group proposed asking attendees to fill out a short questionnaire, the bank refused even to let them ask the industry sector and size of company, saying it would breach client confidentiality. The researchers attended seven in-person events, and found they were probably too long and that the opening, in which a former police officer condemned many common practices, caused younger people to tune out. They tried to set up an anonymous interview system, but found the bank always had reasons why a particular event couldn’t be followed up and wouldn’t allow the researchers to write an explanation that could be shown to the clients.

Meanwhile, businesses were being hit by payment scams in which a criminal would call a company on a Friday afternoon, use information obtained by social engineering, and get an authorisation code for payments ranging from £50,000 up to seven-figure sums. The bank had taken a recording of such a call and had actors re-enact it. Everyone was gripped by this and by the subsequent deconstruction, trying to understand it – but the bank then lost them again by following it with generic recommendations that had no relation to preventing the scam. The researchers’ conclusion: despite the resources and time being spent, the bank’s approach was mostly knee-jerk actions. Sasse also noted that groups like SANS and The Analogies Project are prone to sharing “craft knowledge” that has no scientific basis or evaluation, leading the field around in a circle.


Communicating with the board: workshop summary

This is the summary of a June 2017 RISCS/NCSC workshop that explored supporting boards in making decisions about cyber security. It led directly to the research call that closes December 1, 2017. The day was broken up into three sessions, which are summarised in greater detail in related posts: Setting the scene; The problem with vendors; Assessing and managing cyber risk.

To explore ideas for resolving the difficulties of communication between security practitioners and the boards and C-suites of the companies they work for, Rachel C asked the Practitioners Panel meeting to focus on four main questions:

  • What would help the board to better manage cyber risk?
  • What disciplines and approaches could help us solve this problem?
  • What’s wrong with current approaches?
  • What information does the board need, and what information do security practitioners want to report?

During the rest of the day – which included three presentations, an interactive discussion, a panel discussion, and time to collect ideas on four whiteboards – some main points and common themes emerged:

  • Metrics provide a straightforward mechanism for reporting, but typically count things that are easy to quantify and do not reflect the wider business or security context. If employees go through anti-phishing training and click on fewer links, is the company better protected or losing customers?
  • Linguistic and cultural differences make communication difficult. Boards pay better attention to “business risk” than “cyber risk”. Most boards underestimate the scale of the vulnerability.
  • Boards are interested in value and risk, but these are poorly captured by current methods of assessing security. As a result, boards may quite rationally decide that other risks matter more to the company’s survival – the political risk of port closures for a shipping company, grounded planes for an airline. In larger companies, it can be difficult to get cyber risk through the risk committee and onto the board’s time-limited agenda; understanding internal power structures and politics may help with this.
  • Non-executive directors who sit across many boards often have greater understanding across topic areas than the C-suite, whose members focus on single areas of the business.
  • Board members, who have varying levels of sophistication, need the confidence to ask “stupid questions”; sometimes the ones who know less ask more probing questions and extract better information. However, it may be difficult for them to accurately evaluate the answers they get – how do they know a CISO is good at his job? Board members are not stupid, though security practitioners sometimes treat them that way: they are streetwise, smart, and willing to take some risks, or they wouldn’t be on a board. However, keeping it simple has benefits, since in so many cases the root cause is the same half-dozen things that are being missed.
  • Cyber security is often seen as an “IT issue” unconnected to big changes such as acquisitions that leave behind partially integrated systems, outsourced call centres that can’t manage the risks associated with customer data, or accidental compromise by insiders, which may be a bigger issue than intentional compromise.
  • It is difficult to generalise across sectors (or within the demographics of SMEs, from earlier research by Simon Parkin), because what’s important to businesses in different sectors varies widely, and each organisation is set up differently. Identifying the audience is crucial.
  • Historically, companies that have been hacked or that have experienced a data breach have survived: Sony, Target, and TalkTalk are all still trading despite some reputational damage. Nonetheless, board members may respond to the idea that “our customers are being hacked” and the potential for damage to their personal reputations.
  • Ideas from other industries and disciplines concerned with safety and human behaviour could be helpful: aviation, polling, advertising, marketing, gambling, military intelligence.


Communicating with the board: the problem with vendors

This is the second session of a June 2017 RISCS/NCSC workshop that explored supporting boards in making decisions about cyber security. It led directly to the research call that closes December 1, 2017. The day, which is summarised in full here, was broken up into three sessions; the other two are Setting the scene and How do boards assess and manage cyber risk?

The second discussion began with the question: why are security firms so bad and why is the market so broken? Angela Sasse blamed the lack of scientific validity. Security can’t be done well using off-the-shelf products, and a lot of companies don’t want to hear that. Tim Roberts added that because it’s a hot topic there are a lot of small vendors who think it’s easy and offer generic one-size-fits-all advice. The same level of cultural change the financial industry needed after the crash is what’s needed here, but that’s expensive and hard. Jones said, “There are too many security products and not enough secure products.”

Boards typically think in terms of value: how much does security cost, what is the benefit in relation to that cost, and how does cyber security risk compare to other risks the company faces? It’s a significant problem, however, to explain what money buys in cyber security: “you’re getting more nothing” or “you’re buying peace of mind” don’t indicate value.

However, measuring cyber security in terms of financial value poses other problems, as Rachel pointed out. A company may rationally decide to self-insure against attacks, creating a war chest to cover them, rather than spend money and effort trying to avert attacks it no longer believes it can beat anyway. Assigning value is difficult in any case, especially when compared to other existential risks such as, for a shipping company, the closure of the Suez Canal or, for an airline, grounded planes. Particularly for monopoly companies, the ongoing damage from a cyber incident may be quite short-term; in such cases embarrassment is the main leverage. Roberts noted a joint report McKinsey did with the LSE that found the biggest explainer of the huge variation in appreciation of cyber security risk was the sector the organisation operated in. In a sector where reliability and customer trust are crucial, such as private and corporate banking, a single attack could cause a market drop far greater than the direct cost of the attack, and one that takes years to recover from. In other sectors an outage might just be seen as an ordinary cost of doing business.

Susan asked whether personal reputation might instead be a motivator for boards. Roberts noted that, given a personal connection, even small attacks can hit hard.

Sasse expressed concern that laying off the risk by buying cyber insurance could become the next PPI, in that insurance companies don’t have sufficient actuarial data on which to base pricing, and organisations think they can offload the risk when they really can’t, or believe problems are covered that really aren’t. Roberts suggested that insurance is a distraction, and that the risk is too unpredictable to be standardised. Network externalities mean that the financial damage of a cyber incident may not be felt by the company that had the break-in but instead passed down the chain. This includes the time and effort involved in replacing a supplier a customer can no longer trust, whether it’s an individual having to replace a broadband ISP or a company having to find a new shipping service. In addition, because everyone is interconnected in the digital economy, it’s hard to set priorities for whom to help, and the resources aren’t there to help everyone. It has become very difficult to say which sectors are critical for the UK nationally; a company like Sainsbury’s holds three days’ worth of stock in each store and would run out of food in about five days if its network went down.

A commonly suggested approach is using the analogy of health and safety. However, implementing changes to aid health and safety may be expensive in the first year but require only a trickle of funds for maintenance in subsequent years. Cyber security’s spending pattern can’t be turned into a one-time upgrade.

Security practitioners have very few positive signals to show that a business is doing the right thing, unlike other risk areas. Cyber Essentials was mentioned as an exception that helps some parts of a business, but it’s only a first step.

One of the unexplored areas of cyber security is IP-enabled objects. Maersk, for example, has hundreds of IP-enabled ships; the industry’s standards organisation now requires the company to do a cyber security assessment for each one.

Siloing was also raised as an issue: security began as an isolated technical area and still needs more integration into the rest of the business. Other risks are better integrated: food safety, for example, is thoroughly embedded throughout Morrison’s culture, where no one who hadn’t been trained for the fish counter would agree to serve there even temporarily; this doesn’t happen with security. One result is that security practitioners treat board members as if they were stupid, prefacing sentences with “You should”. One way of communicating the risks directly is to run pentests (with permission) on board members, showing how personally vulnerable they are as a way of opening the conversation.

Rowena Fell commented that, in order to have resonance in the marketplace, research must define the word “security” at the outset because it is used to mean many different things.

Geraint Price ended the discussion with three comments:

  • Badging as currently practised appears to be a standards tickbox compliance exercise, not a measurement;
  • The meaning of “value” has never been well explained and it’s not clear whether it can be measured;
  • We talk about communication from IT security to the board, but there is no translation of the board’s use of “risk appetite” to IT security.
