Crossing the streams: Lizzie Coles-Kemp

Lizzie Coles-Kemp, deputy director of RISCS

A key goal of RISCS is to approach security from myriad angles. Among RISCS researchers are psychologists and human-computer interaction specialists, as well as representatives of more traditional disciplines such as mathematics and computer science. RISCS deputy director, Royal Holloway professor Lizzie Coles-Kemp, represents multiple disciplines all by herself.

This contention is easily borne out by just a small selection of Coles-Kemp’s work. For RISCS1, she led Cyber Security Cartographies (CySeCa), which compared social information sharing and network data traffic flows within an organisation to find gaps. She also led the visualisation work package in Technology-supported Risk Estimation by Predictive Assessment of Socio-technical Security (TREsPASS), which built an “attack navigator” to help security practitioners determine which attack opportunities are possible, which attacks are the most urgent to understand, and which countermeasures are most effective. For TREsPASS, Coles-Kemp’s team included a design critic and academic, an interactive design team, an artist, and three mathematicians. Together, they developed visualisations that reflected the work produced by the mathematical modelling and risk algorithm teams.

Coles-Kemp’s publications are equally multi-disciplinary. Her 2013 paper Granddaughter beware! An intergenerational case study of managing trust issues in the use of Facebook is a sociological study of privacy discussions between pairs of grandmothers and granddaughters and reveals the roles families and tools play in determining trust practices. The 2014 paper Watching You Watching Me: The Art of Playing the Panopticon, written with Alf Zugenmaier and Makayla Lewis, studied the impact of the monitoring and surveillance functionality built into many public services intended to protect the vulnerable. The researchers found that prioritising the securing and monitoring of the system makes the services’ users feel more insecure and hinders the delivery of digital services. They concluded by arguing that such services must be designed to support the social networks their users interact with.

In a 2016 article written with fellow TREsPASS member René Rydhof Hansen, Everyday Security: A Manifesto for New Approaches to Security Modelling, Coles-Kemp argues that because people need both to produce and share information and to protect it in order to feel safe and secure, modelling everyday security is particularly complex; for this reason, a family of models is required to articulate people’s everyday security needs. Finally, in a paper written with Debi Ashenden (professor of cyber security at the University of Portsmouth and lead for protective security and risk at the Centre for Research and Evidence on Security Threats, CREST) and presented at the 2017 Academic Archers conference, Coles-Kemp and Ashenden dispute the frequently made assertion that social media are absent from the fictional world of the BBC’s long-running radio soap opera, The Archers, and explore what the show’s characters and their world can tell us about what security means to people in their everyday lives.

The path that led to this unusual approach to security began with a humanities degree in Scandinavian studies and linguistics from the University of Hull. After working briefly in theatre administration, an office temp job led Coles-Kemp to Uniplex, a software company that made a Unix equivalent of Microsoft Office. When the Swedish military needed a secure version of the software, Coles-Kemp’s fluent Swedish meant she was drafted in from training to help with porting and translating it.

Getting it to work on a secure platform was a complex job that piqued Coles-Kemp’s interest: “I got heavily involved with understanding how the secure version of the operating system was designed.”

Coles-Kemp believes that the fact that she only spoke about security in Swedish for the first few years has influenced how she thinks about the subject to this day.

“Linguistically, it does frame how you understand the concepts, particularly structure. When you’re talking about access control in Swedish it’s a different logic than when you talk about it in Anglo-Saxon languages,” she says. Partly, this is because the same word, “säkerhet”, can apply to both safety and security. Plus, “In the Scandinavian view of the world there is often a much more socio-technical bent for thinking about security. It’s a tradition that goes back to the 1970s and the early Scandinavian thinking about software design and interaction.” She went on to work for Dynasoft, a Swedish software house producing Unix access control products, which by the mid-1990s meant smart cards and a forerunner of public key infrastructure. Coles-Kemp ran Dynasoft’s UK subsidiary, winning the 1997 Oxfordshire Business Woman of the Year award.

In 1997, after the company was sold to Security Dynamics (later RSA Security), she became the security manager for the British Council and began an MSc at Royal Holloway. The former showed her that no two risk assessments worked the same way. As a result, “I became very interested in how organisational security processes work, what makes a risk assessment or audit process effective, and what ‘effective’ is.” She focused on these issues for her PhD at King’s College London, still very much a practitioner when she finished it in 2008. Her contemporaneous work for Lloyds Register Quality Assurance (LRQA) focused on ISO 27001 security management assessment for a wide range of organisations including one of the private hospital chains.

“Health care is fascinating because the need for clinical governance is completely enmeshed with security governance. You have to think about security from the perspective of the clinical, and information-sharing needs change as the patient’s condition changes.”

Her academic career began in 2005, when she began teaching undergraduates part-time at Royal Holloway; she moved to full-time in 2008. On arrival, she applied to participate in a “sandpit” run by the Engineering and Physical Sciences Research Council (EPSRC), the Economic and Social Research Council (ESRC), and the Technology Strategy Board. Coles-Kemp was part of a successful funding bid that emerged from this five-day immersive environment, in which researchers collaborated on developing research questions, forming new teams, and preparing proposals. Led by Coles-Kemp, Visualisation and Other Methods of Expression (VOME) studied why people share what they do online and what they view as protection. Her remit: cover under-served communities. In partnership with Ashenden and Alison Adams, the Universities of Salford and Cranfield, the consultancy Consult Hyperion, and Sunderland City Council, Coles-Kemp worked directly with hard-to-reach communities such as the long-term unemployed in socio-economically deprived areas. In that environment, traditional research tools like focus groups and surveys were little help; new methods were needed.

“We weren’t understanding what was of interest to those communities about data sharing because we were making all sorts of assumptions about what was important to them, and we had to get that out of the way to really understand data sharing in this context.”

For example, in these communities, few imagined they had much realistic chance of employment – so the risk that what they posted online might damage those prospects was meaningless. Similarly, in families who have been physically close for generations it often made more sense, for both safety and security, to share passwords. Coles-Kemp often heard, “We share a lot of other stuff.” The result was, “We got close enough to the communities to understand that it’s not that clear-cut, and we have to think about the overall safety and security of the individual within the family unit.”

Lizzie Coles-Kemp, drawn by Makayla Lewis

Their solution happened almost by accident. In VOME’s first year, the ESRC offered a bursary to take part in a festival of social science. The VOME group partnered with the theatre company Bimbilibausa, led by clown Freya Stang, to present a short play about privacy choices in the workplace, based on their research to date. The group took the play to Sunderland and invited the participants they had worked with to use the council’s voting paddles to select the story’s privacy outcome. Because whole families attended, the play led to intergenerational conversations about privacy and a meta-narrative that showed Coles-Kemp’s team the value of creative engagement techniques. The results encouraged Coles-Kemp to continue working with researchers and artists to develop a range of creative methods, including story sheets and Lego, to create three or four provocations or open questions that then let them drill down into individual issues. This work led to the grandmother-granddaughter paper, developed the understanding behind the panopticon paper, revealed the complexity of everyday security and therefore the need for a family of information security models, and highlighted the importance of community and family interactions, such as those that dominate the narratives of The Archers, in regulating the flow of information.

Creative engagement methods have both utility for the participant communities and methodological value. A further study, funded by the Arts and Humanities Research Council (AHRC), focused on families separated by prison sentences, with the goal of understanding why they didn’t engage with the support services provided for them. In this case, the families proved to be more interested in talking about the journeys involved in prison visiting. “We went with that, figuring that if support services were important that would manifest itself,” Coles-Kemp says. The group worked with a prison in North East England to develop questions and create a large wall collage, which is still in use as part of rehabilitation training for offenders preparing to leave prison, as well as a series of story cubes that form part of visitor induction, helping families understand the kinds of issues that will confront them and introducing the support that’s available.

The creative engagement described here – story cubes, collages, drawings, Lego building – remains part of Coles-Kemp’s practice. CySeCa’s researchers, for example, included Makayla Lewis, who used her sketchnoting, HCI, and user experience expertise to create cartoons based on interviews with security practitioners. These were used to initiate discussions that exposed the information flows among people; the results were then compared with the results of network traffic analysis to find policy conflicts and gaps. In September 2016, Coles-Kemp started a five-year, EPSRC-funded fellowship programme to develop these techniques in conjunction with wider political and sociological theories of security, in order to design and evaluate alternative approaches to securing digital services. Her work in this programme focuses on essential public services including welfare, health, housing, employment, education, and criminal justice. Coles-Kemp will continue to work with academic and practitioner communities in RISCS to both develop and disseminate these theoretical frameworks, practical techniques, and expertise.

The secondary security questions gap

Angela Sasse at CPDP2017

The BBC reports that a common pastime on Facebook, comparing users’ top ten concerts, may present a security risk. The reason lies in the secondary security questions many websites use as fallback measures to identify users who have forgotten their passwords. Among the standard questions websites prompt users to answer are the first gig you attended, your mother’s maiden name, your favourite movie, and the name of your first pet.

Quoted in the story, RISCS director and UCL professor Angela Sasse notes that it’s fairer to blame the sites for security breaches than individuals, arguing that using information that may be publicly available violates good security principles. Similar stories have surfaced in the past relating to other social media trends, such as posting your “porn name” – which is typically made up of the name of your first pet coupled with the name of the street you grew up on.

Sasse told the BBC, “The risk is not so much publishing these lists, rather that somebody thinks it is a good idea to use questions like that as security credentials.”

An ancillary problem is that many sites ask the same questions, and in the event of a data breach those answers can be used to gain access to other accounts the user holds.

At the National Cyber Security Centre blog, Kate R expands on how site owners and developers might manage these security questions so they leave less of a gap in security. First, she says, try to find alternatives. If that’s not possible, avoid questions with easily guessable answers that attackers can exploit. Dynamic questions, which depend on answers generated from data the sites already hold, may be a more secure choice than static questions if the pool of possible answers is large enough. Consider whether users can remember the answers they give, whether they are likely to use the same answers elsewhere, and how much effort the system will require of users.
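The point about easily guessable answers can be illustrated with a small sketch. The survey data below is entirely hypothetical; the idea is simply that when a handful of answers dominate the pool, an attacker allowed a few guesses succeeds alarmingly often.

```python
from collections import Counter

def guess_success_rate(answers, k=3):
    """Chance that an attacker who tries the k most common answers breaks
    into the account of a user drawn from this population."""
    counts = Counter(answers)
    top_k = sum(c for _, c in counts.most_common(k))
    return top_k / sum(counts.values())

# Invented survey of "first pet's name" answers: a handful of names dominate
pet_names = (["Max"] * 30 + ["Bella"] * 25 + ["Charlie"] * 20 +
             ["Rex"] * 10 + ["Misty"] * 8 + ["Zanzibar"] * 7)

rate = guess_success_rate(pet_names, k=3)
print(f"An attacker with 3 guesses succeeds {rate:.0%} of the time")  # 75%
```

A dynamic question with a large, flat answer pool drives this success rate towards zero, which is Kate R’s argument in miniature.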

Steven J. Murdoch

On the Bentham’s Gaze blog, UCL Royal Society University Research Fellow Steven J. Murdoch expands on the theme that companies should stop passing the buck to consumers. In a discussion of standard security advice that’s unfit for the real world, he provides some useful advice. For example, he says password re-use across sites is a bigger problem than choosing passwords that are simple enough to remember; he recommends remembering unique passwords for the few most important sites, such as banking and email, and using a password manager for the rest. Similarly, although security experts typically tell users not to write down or share their passwords, this is poor advice within the context of a family, where doing so can be important. Murdoch goes on to discuss the difficulties of giving good security advice when individuals have so little control over the quality of the security measures imposed on them by others such as banks, lenders, mobile phone handset manufacturers, and so on.

Modeling and analysis of influence power for information security decisions

Iryna Yevseyeva, Charles Morisset and Aad van Moorsel


Users of computing systems and devices frequently make decisions related to information security, e.g., when choosing a password or deciding whether to log into an unfamiliar wireless network. Employers or other stakeholders may have a preference for certain outcomes, without being able, or wishing, to enforce a particular decision. In such situations, systems may build in design nudges to influence the decision making, e.g., by highlighting the employer’s preferred solution. In this paper we model the influencing of information security decisions to identify which approaches to influencing are most effective and how they can be optimized. To do so, we extend traditional multi-criteria decision analysis models with modifiable criteria, to represent the available approaches an influencer has for influencing the choice of the decision maker. The notion of influence power is introduced to characterize the extent to which an influencer can influence decision makers. We illustrate our approach using data from a controlled experiment on techniques to influence which public wireless network users select. This allows us to calculate influence power and identify which design nudges exercise the most influence over user decisions.
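The core idea can be sketched as a toy model – this is an illustration, not the authors’ actual formulation, and the networks, criteria values, and nudge below are all invented. Options are scored by a weighted sum of criteria; a nudge modifies one criterion of the influencer’s preferred option; influence power is the fraction of a simulated population whose choice flips to that option.

```python
import random

def choose(options, weights):
    """Weighted-sum multi-criteria score; criteria are (security, convenience)."""
    return max(options, key=lambda o: sum(w * v for w, v in zip(weights, o["criteria"])))

def influence_power(options, preferred, nudge, population=10_000, seed=0):
    """Fraction of simulated decision makers whose choice flips to `preferred`
    once the nudge modifies that option's criteria."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(population):
        w_sec = rng.random()              # each person weighs security differently
        weights = (w_sec, 1 - w_sec)
        before = choose(options, weights)
        nudged = [dict(o, criteria=nudge(o)) if o["name"] == preferred else o
                  for o in options]
        after = choose(nudged, weights)
        if after["name"] == preferred and before["name"] != preferred:
            flips += 1
    return flips / population

networks = [
    {"name": "CoffeeShop_Free", "criteria": (0.2, 0.9)},   # insecure but convenient
    {"name": "Corp_VPN",        "criteria": (0.9, 0.4)},   # employer's preference
]
# Nudge: highlighting the employer's network raises its perceived convenience
power = influence_power(networks, "Corp_VPN",
                        nudge=lambda o: (o["criteria"][0], o["criteria"][1] + 0.3))
print(f"influence power ≈ {power:.2f}")
```

Comparing the power of different candidate nudges in this way identifies which design exercises the most influence, which is the optimisation question the paper poses.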

Date: April 2016
Published: Performance Evaluation: An International Journal, Volume 98
Publisher: Elsevier

Personality and Social Framing in Privacy Decision-Making: A Study on Cookie Acceptance

Lynne M. Coventry, Debora Jeske, John M. Blythe, James Turland and Pam Briggs


Despite their best intentions, people struggle with the realities of privacy protection and will often sacrifice privacy for convenience in their online activities. Individuals show systematic, personality-dependent differences in their privacy decision making, which makes them of interest to those who seek to design ‘nudges’ intended to influence privacy behaviors. We explore such effects in a cookie decision task. Two hundred and ninety participants were given an incidental website review task that masked the true aim of the study. At the task outset, they were asked whether they wanted to accept a cookie in a message that either contained a social framing ‘nudge’ (they were told that either a majority or a minority of users like themselves had accepted the cookie) or contained no information about social norms (control). At the end of the task, participants were asked to complete a range of personality assessments (impulsivity, risk-taking, willingness to self-disclose and sociability). We found social framing to be an effective behavioral nudge, reducing cookie acceptance in the minority social norm condition. Further, we found personality effects, in that those scoring highly on risk-taking and impulsivity were significantly more likely to accept the cookie. Finally, we found that the application of a social nudge could attenuate the personality effects of impulsivity and risk-taking. We explore the implications for those working in the privacy-by-design space.

Date: 7 September 2016
Published: Frontiers in Psychology, Volume 7, Article 1341, pp. 1-12
Publisher: Frontiers Research Foundation

Two-stage Security Controls Selection

Iryna Yevseyeva, Vitor Basto Fernandes, Aad van Moorsel, Helge Janicke and Michael Emmerich


To protect a system from potential cyber security breaches and attacks, one needs to select efficient security controls, taking into account technical and institutional goals and constraints, such as available budget, enterprise activity, and internal and external environment. Here we model security controls selection as a two-stage decision-making process: first, managers and information security officers define the size of the security budget; second, the budget is distributed between various types of security controls. By viewing loss prevention with security controls as gains relative to a baseline (losses without applying security controls), we formulate the decision-making process as a classical portfolio selection problem. The model treats security budget allocation as a two-objective problem, balancing risk and return, given a budget constraint. The Sharpe ratio is used to identify an optimal point on the Pareto front at which to spend the budget. At the management level, the budget size is chosen by computing the trade-offs between Sharpe ratios and budget sizes. The proposed two-stage decision-making model can be solved by quadratic programming techniques, as demonstrated for a test-case scenario with realistic data.
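The two stages can be sketched roughly as follows. This is an illustrative toy, not the paper’s model: the control names and numbers are invented, a brute-force grid search stands in for quadratic programming, and a square-root diminishing-returns assumption is added so that Sharpe ratios actually vary with budget size.

```python
import itertools
import math

# Hypothetical controls: loss-prevention "return" rate and risk (std dev) per unit spend
controls = {
    "firewall": {"ret": 0.08, "risk": 0.05},
    "training": {"ret": 0.12, "risk": 0.10},
    "backup":   {"ret": 0.06, "risk": 0.03},
}

def portfolio_stats(alloc):
    """Expected gain and risk of an allocation; gains are assumed to show
    diminishing returns (sqrt of spend), risks to be independent."""
    ret = sum(controls[c]["ret"] * math.sqrt(a) for c, a in alloc.items())
    risk = math.sqrt(sum((controls[c]["risk"] * a) ** 2 for c, a in alloc.items()))
    return ret, risk

def best_sharpe(budget, step=0.1, risk_free=0.0):
    """Stage 2: grid-search allocations of `budget` over the controls and
    return the one maximising the Sharpe ratio."""
    names = list(controls)
    steps = int(round(budget / step))
    best_alloc, best_ratio = None, -math.inf
    for split in itertools.product(range(steps + 1), repeat=len(names)):
        if sum(split) != steps:          # spend exactly the budget
            continue
        alloc = {n: s * step for n, s in zip(names, split)}
        ret, risk = portfolio_stats(alloc)
        if risk > 0 and (ret - risk_free) / risk > best_ratio:
            best_alloc, best_ratio = alloc, (ret - risk_free) / risk
    return best_alloc, best_ratio

# Stage 1: compare Sharpe ratios across candidate budget sizes
for budget in (0.5, 1.0, 2.0):
    alloc, ratio = best_sharpe(budget)
    print(f"budget {budget}: Sharpe {ratio:.2f}, allocation {alloc}")
```

In the paper the stage-2 optimisation is a quadratic program; the grid search here is only a readable substitute for small examples.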

Date: 2016
Published: Procedia Computer Science, Volume 100, pp. 971-978
Publisher: Elsevier


Exploring the relationship between impulsivity and decision-making on mobile devices

Debora Jeske, Pam Briggs and Lynne Coventry


Mobile devices offer a common platform for both leisure and work-related tasks, but this has resulted in a blurred boundary between home and work. In this paper, we explore the security implications of this blurred boundary, both for the worker and the employer. Mobile workers may not always make optimal security-related choices when “on the go” and more impulsive individuals may be particularly affected as they are considered more vulnerable to distraction. In this study, we used a task scenario, in which 104 users were asked to choose a wireless network when responding to work demands while out of the office. Eye-tracking data was obtained from a subsample of 40 of these participants in order to explore the effects of impulsivity on attention. Our results suggest that impulsive people are more frequent users of public devices and networks in their day-to-day interactions and are more likely to access their social networks on a regular basis. However, they are also likely to make risky decisions when working on-the-go, processing fewer features before making those decisions. These results suggest that those with high impulsivity may make more use of the mobile Internet options for both work and private purposes, but they also show attentional behavior patterns that suggest they make less considered security-sensitive decisions. The findings are discussed in terms of designs that might support enhanced deliberation, both in the moment and also in relation to longer term behaviors that would contribute to a better work–life balance.

Date: August 2016
Published: Personal and Ubiquitous Computing, Volume 20, Issue 4, pp. 545-557
Publisher: Springer

Combining Qualitative Coding and Sentiment Analysis: Deconstructing Perceptions of Usable Security in Organisations

Ingolf Becker, Simon Parkin and M. Angela Sasse


Background: A person’s security behavior is driven by underlying mental constructs, perceptions and beliefs. Examination of security behavior is often based on dialogue with users of security, which is analysed in textual form by qualitative research methods such as Qualitative Coding (QC). Yet QC has drawbacks: security issues are often time-sensitive, but QC is extremely time-consuming. QC is often carried out by a single researcher, raising questions about the validity and repeatability of the results. Previous research has identified frequent tensions between security and other tasks, which can evoke emotional responses. Sentiment Analysis (SA) is simpler to execute and has been shown to deliver accurate and repeatable results.
Aim: By combining QC with SA we aim to focus the analysis on areas of strongly represented sentiment. Additionally, we can analyse the variations in sentiment across populations for each of the QC codes, allowing us to identify beneficial and harmful security practices.
Method: We code QC-annotated transcripts independently for sentiment. The distribution of sentiment for each QC code is statistically tested against the distribution of sentiment of all other QC codes. Similarly, we also test the sentiment of each QC code across population subsets. We compare our findings with the results from the original QC analysis. Here we analyse 21 QC-treated interviews with 9 security specialists, 9 developers and 3 usability experts at 3 large organisations claiming to develop ‘usable security products’. This combines 4983 manually annotated instances of sentiment with 3737 quotations over 76 QC codes.
Results: The methodology identified 83 statistically significant variations (with p < 0.05). The original qualitative analysis implied that organisations considered usability only when not doing so impacted revenue; our approach finds that developers appreciate usability tools to aid the development process, but that conflicts arise due to the disconnect between customers and developers. We find organisational cultures which put security first, creating an artificial trade-off for developers between security and usability.
Conclusions: Our methodology confirmed many of the QC findings, but gave more nuanced insights. The analysis across different organisations and employees confirmed the repeatability of our approach, and provided evidence of variations that were lost in the QC findings alone. The methodology adds objectivity to QC in the form of reliable SA, but does not remove the need for interpretation. Instead it shifts interpretation from large QC data to condensed statistical tables, which makes it more accessible to a wider audience not necessarily versed in QC and SA.
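The per-code test can be sketched as follows. This is an illustrative reconstruction, not the authors’ pipeline (their annotation was manual and their test statistic unspecified here); the sentiment scores and the code name are invented. A permutation test asks how often a random re-labelling of the quotations produces as large a sentiment gap as the one observed for a given QC code.

```python
import random
from statistics import mean

def permutation_test(group, rest, iters=5000, seed=1):
    """Two-sided permutation test: how often does a random re-labelling of
    the pooled quotations produce a mean-sentiment gap at least as large
    as the observed one?"""
    rng = random.Random(seed)
    observed = abs(mean(group) - mean(rest))
    pooled = list(group) + list(rest)
    n = len(group)
    hits = 0
    for _ in range(iters):
        rng.shuffle(pooled)
        if abs(mean(pooled[:n]) - mean(pooled[n:])) >= observed:
            hits += 1
    return hits / iters

# Invented sentiment scores (-1 negative … +1 positive), one per quotation
usability_tooling = [0.6, 0.4, 0.7, 0.5, 0.8, 0.3, 0.6]   # quotes under one QC code
all_other_codes   = [-0.2, 0.1, 0.0, -0.4, 0.2, -0.1, 0.1, -0.3, 0.0, 0.1]

p = permutation_test(usability_tooling, all_other_codes)
print(f"p = {p:.4f}")  # far below 0.05 for this deliberately separated toy data
```

Running such a test for each of the 76 QC codes, against both the remaining codes and population subsets, is what yields the paper’s table of statistically significant variations.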

Date: 26 May 2016
Published: The LASER Workshop: Learning from Authoritative Security Experiment Results
Publisher: IEEE




Towards robust experimental design for user studies in security and privacy

Kat Krol, Jonathan M. Spring, Simon Parkin and M. Angela Sasse


Background: Human beings are an integral part of computer security, whether we actively participate or simply build the systems. Despite this importance, understanding users and their interaction with security is a blind spot for most security practitioners and designers.
Aim: Define principles for conducting experiments into usable security and privacy, to improve study robustness and usefulness.
Data: The authors’ experiences conducting several research projects, complemented with a literature survey.
Method: We extract principles based on relevance to the advancement of the state of the art. We then justify our choices by providing published experiments as cases of where the principles are and are not followed in practice, to demonstrate the impact. Each principle is a discipline-specific instantiation of desirable experiment-design elements as previously established in the domain of philosophy of science.
Results: Five high-priority principles: (i) give participants a primary task; (ii) incorporate realistic risk; (iii) avoid priming the participants; (iv) perform double-blind experiments whenever possible; and (v) think carefully about how meaning is assigned to the terms threat model, security, privacy, and usability.
Conclusion: The principles do not replace researcher acumen or experience; however, they can provide a valuable service for facilitating evaluation, guiding younger researchers and students, and marking a baseline common language for discussing further improvements.

Date: 26 May 2016
Published: The LASER Workshop: Learning from Authoritative Security Experiment Results
Publisher: IEEE

“I don’t like putting my face on the Internet!”: An acceptance study of face biometrics as a CAPTCHA replacement

Kat Krol, Simon Parkin and M. Angela Sasse


Biometric technologies have the potential to reduce the effort involved in securing personal activities online, such as purchasing goods and services. Verifying that a user session on a website is attributable to a real human is one candidate application, especially as the existing CAPTCHA technology is burdensome and can frustrate users. Here we examine the viability of biometrics as part of the consumer experience in this space. We invited 87 participants to take part in a lab study, using a realistic ticket-buying website with a range of human verification mechanisms including a face biometric technology. User perceptions and acceptance of the various security technologies were explored through interviews and a range of questionnaires within the study. The results show that some users wanted reassurance that their personal image would be protected or discarded after verification, whereas others felt that if they saw enough people using face biometrics they would feel assured that it was trustworthy. Face biometrics were seen by some participants to be more suitable for high-security contexts, and by others as providing extra personal data that had unacceptable privacy implications.

Date: 26 May 2016
Published: 2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)
Publisher: IEEE

Applying Cognitive Control Modes to Identify Security Fatigue Hotspots

Simon Parkin, Kat Krol, Ingolf Becker and M. Angela Sasse


Security tasks can burden the individual, to the extent that security fatigue promotes bad security habits. Here we revisit a series of user-centred studies of security mechanisms used as part of regular routines, such as two-factor authentication. These studies inform reflection upon the perceived contributors to and consequences of fatigue, and the strategies that a person may adopt in response to feeling overburdened by security. The fatigue produced by security tasks is then framed using a model of cognitive control modes, which explores human performance and error. Security tasks are then considered in terms of modes such as unconscious routines and knowledge-based ad-hoc approaches. Conscious attention can support adaptation to novel security situations, but is error-prone and tiring; both simple security routines and technology-driven automation can minimise effort, but may miss cues from the environment that a nuanced response is required.

Date: 22 June 2016
Published: SOUPS Workshop on Security Fatigue
Publisher: USENIX