Observing the WannaCry fallout: confusing advice and playing the blame game

As researchers who strive to develop effective measures that help individuals and organisations stay secure, we have observed the public communications that followed the WannaCry ransomware attack of May 2017 with increasing concern. As in previous incidents, many descriptions of the attack are inaccurate – something colleagues have pointed out elsewhere. Our concern here is the advice being disseminated, and the fact that various stakeholders seem more concerned with blaming each other than with working together to prevent further attacks affecting organisations and individuals.

Countries initially affected by WannaCry. From Wikimedia Commons (user:Roke).

Let’s start with the advice that is being handed out. Much of it is unhelpful at best, and downright wrong at worst – a repeat of what happened after Heartbleed, when people were advised to change their passwords before the affected organisations had patched their SSL code. Here is a sample of real advice sent to staff in a major organisation post-WannaCry:

“We urge you to be vigilant and not to open emails that are unexpected, unusual or suspicious in any way. If you experience any unusual computer behaviour, especially any warning messages, please contact your IT support immediately and do not use your computer further until advised to do so.”

Useful advice has to be correct and actionable. Users have to cope with dozens, maybe hundreds, of unexpected emails every day, most containing links and many accompanied by attachments; they cannot take ten minutes to ponder each one before deciding whether to respond. Such instructions also implicitly and unfairly suggest that users’ ordinary behaviour plays a major role in causing major incidents like this one. RISCS advocates enlisting users as part of the frontline defence. Well-targeted, automated blocking of malicious emails lessens the burden on individual users and builds resilience for the organisation in general.

In an example of how to confuse users, The Register reports that City of London Police sent out its “advice” via email in an attachment entitled “ransomware.pdf”. Users are thus simultaneously exhorted to be “vigilant” and not open suspicious emails, yet required to open an attachment in order to get that advice. The confusion resulting from contradictory advice is worse than the direct consequences of the attack: it enables future attacks. Why play Keystone Cyber Cops when the UK’s National Technical Authority for such matters, the National Cyber Security Centre (NCSC), offers authoritative and well-presented advice on its website?
Our other concern is the unedifying squabbling between spokespeople for governments and suppliers, each blaming the other for running unsupported software, not paying for support, charging to support unsupported software, and so on, with security experts weighing in on all sides. To a general public already alarmed by media headlines, finger-pointing creates little confidence that either party is competent or motivated to keep secure the technology on which all our lives now depend. When the supposed “good guys” expend their energy fighting each other instead of working together to defeat the attackers, it’s hard to avoid the conclusion that we are most definitely doomed. As Columbia University professor Steve Bellovin writes, the question of who should pay to support old software requires broader collaborative thought; in avoiding that debate we are choosing, as a society, to pay for such security failures.

We would refer those looking for specific advice on dealing with ransomware to the NCSC guidance, which is offered in separate parts for SMEs and home users, and for enterprise administrators.

Much of NCSC’s advice is made up of things we all know: we should back up our data, patch our systems, and run anti-virus software. Part of RISCS’ remit is to understand why users often don’t follow this advice. Ensuring backups remain uninfected is, unfortunately, trickier than it should be. Ransomware will infect – that is, encrypt – not only the machine it’s installed on but any permanently-connected physical or network drive. This problem ought to be solved by cloud storage, but it can be difficult to find out whether cloud backups will be affected by ransomware, and technical support documentation often simply refers individuals to “your IT support”, even though vendors know few individuals have any. Dropbox is unusually helpful, and provides advice on how to recover from a ransomware attack and how far it can help. Users should be encouraged to read such advice in advance and factor it into backup plans.
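
Since recovery depends on having at least one backup that ransomware cannot reach, the simplest robust pattern is a versioned snapshot written to storage that is attached only while the backup runs. Below is a minimal sketch of that idea in Python; the paths are placeholders, and the full-copy approach is deliberately naive – it illustrates the versioning principle rather than a complete backup tool.

```python
#!/usr/bin/env python3
"""Versioned-backup sketch: each run writes a new timestamped snapshot
rather than overwriting the previous one, so ransomware that encrypts
today's files cannot also destroy yesterday's copy. The destination
should live on storage that is detached (or read-only) between runs."""

import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path.home() / "Documents"   # directory to protect (placeholder)
DEST_ROOT = Path("/mnt/backup")      # mounted only while backing up (placeholder)

def snapshot() -> Path:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = DEST_ROOT / f"snapshot-{stamp}"
    shutil.copytree(SOURCE, target)  # full copy; incremental schemes are an easy upgrade
    return target

if __name__ == "__main__":
    print(f"Backup written to {snapshot()}")
```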

There are many reasons why people do not update their software. They may, for example, have had bad experiences that lead them to worry that security updates will fail, leave their system damaged, or incorporate unwanted changes in functionality. Software vendors can help here by rigorously testing updates and resisting the temptation to bundle in new features. IT support staff can help by running their own tests, allowing them to reassure users that any resulting problems will be resolved in a timely manner.

In some cases, there are no updates to install. The WannaCry ransomware attack highlighted the continuing use of desktop Windows XP, which Microsoft stopped supporting with security updates in 2014. A few organisations still pay for special support contracts, and Microsoft made an exception for WannaCry by releasing a security patch more widely. Organisations that still have XP-based systems should now investigate to understand why equipment using an unsafe, outdated operating system is still in use. Ideally, the software should be replaced with a more modern system; if that’s not possible the machine should be isolated from network connections. No amount of reminding users to patch their systems or telling them to “be vigilant” will be effective in such cases.
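
For administrators wondering where such machines are hiding, one quick first step is to find hosts that still answer on TCP port 445, the SMB port through which WannaCry’s worm component spread. The sketch below is illustrative only – the address range is an assumption, and a real inventory would rely on proper asset-management or vulnerability-scanning tools.

```python
#!/usr/bin/env python3
"""Flag hosts on an internal range that accept connections on TCP 445
(SMB), the service exploited by WannaCry's worm component. Hosts that
respond are candidates for patching or isolation. The address range
below is a placeholder."""

import socket

def smb_open(host: str, timeout: float = 0.5) -> bool:
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for i in range(1, 255):
        host = f"192.168.1.{i}"
        if smb_open(host):
            print(f"{host} exposes SMB - check patch level or isolate")
```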

This article also appears on the Bentham’s Gaze blog.

Crossing the streams: Lizzie Coles-Kemp

Lizzie Coles-Kemp, deputy director of RISCS

A key goal of RISCS is to approach security from myriad angles. Among RISCS researchers are psychologists and human-computer interaction specialists, as well as representatives of more traditional disciplines such as mathematics and computer science. The RISCS deputy director, Royal Holloway professor Lizzie Coles-Kemp, represents multiple disciplines all by herself.

This contention is easily borne out by just a small selection of Coles-Kemp’s work. For RISCS1, she led Cyber Security Cartographies (CySeCa), which compared social information sharing and network data traffic flows within an organisation to find gaps. She also led the visualisation work package in Technology-supported Risk Estimation by Predictive Assessment of Socio-technical Security (TREsPASS), which built an “attack navigator” to help security practitioners determine which attack opportunities are possible, which attacks are the most urgent to understand, and which countermeasures are most effective. For TREsPASS, Coles-Kemp’s team included a design critic and academic, an interactive design team, an artist, and three mathematicians. Together, they developed visualisations that reflected the work produced by the mathematical modelling and risk algorithm teams.

Coles-Kemp’s publications are equally multi-disciplinary. Her 2013 paper Granddaughter beware! An intergenerational case study of managing trust issues in the use of Facebook is a sociological study of privacy discussions between pairs of grandmothers and granddaughters, and reveals the roles families and tools play in determining trust practices. The 2014 paper Watching You Watching Me: The Art of Playing the Panopticon, written with Alf Zugenmaier and Makayla Lewis, studied the impact of the monitoring and surveillance functionality built into many public services to protect the vulnerable. The researchers found that prioritising the securing and monitoring of the system makes the services’ users feel more insecure and hinders the delivery of digital services. They concluded by arguing that such services must be designed to support the social networks their users interact with.

In a 2016 article with fellow TREsPASS member René Rydhof Hansen, Everyday Security: A Manifesto for New Approaches to Security Modelling, Coles-Kemp argues that because people need both to produce and share information and to protect it in order to feel safe and secure, modelling everyday security is particularly complex; for this reason, a family of models is required to articulate people’s everyday security needs. Finally, in a paper presented at the 2017 Academic Archers conference and written with Debi Ashenden – professor of cyber security at the University of Portsmouth and the lead for protective security and risk at the Centre for Research and Evidence on Security Threats (CREST) – Coles-Kemp and Ashenden dispute the frequently made assertion that social media are absent from the fictional world of the BBC’s long-running radio soap opera, The Archers, and explore what the show’s characters and their world can tell us about what security means to people in their everyday lives.

The path that led to this unusual approach to security began with a humanities degree in Scandinavian studies and linguistics from the University of Hull. After working briefly in theatre administration, an office temp job led Coles-Kemp to Uniplex, a software company that made a Unix equivalent of Microsoft Office. When the Swedish military needed a secure version of the software, Coles-Kemp’s fluent Swedish meant she was drafted in from training to help with porting and translating it.

Getting it to work on a secure platform was a complex job that piqued Coles-Kemp’s interest: “I got heavily involved with understanding how the secure version of the operating system was designed.”

Coles-Kemp believes that the fact that she only spoke about security in Swedish for the first few years has influenced how she thinks about the subject to this day.

“Linguistically, it does frame how you understand the concepts, particularly structure. When you’re talking about access control in Swedish it’s a different logic than when you talk about it in Anglo-Saxon languages,” she says. Partly, this is because the same word, “säkerhet”, can apply to both safety and security. Plus, “In the Scandinavian view of the world there is often a much more socio-technical bent for thinking about security. It’s a tradition that goes back to the 1970s and the early Scandinavian thinking about software design and interaction.” She went on to work for Dynasoft, a Swedish software house producing Unix access control products, which by the mid-1990s meant smart cards and a forerunner of public key infrastructure. Coles-Kemp ran Dynasoft’s UK subsidiary, winning the 1997 Oxfordshire Business Woman of the Year award.

In 1997, after the company was sold to Security Dynamics (later RSA Security), she became the security manager for the British Council and began an MSc at Royal Holloway. The former showed her that no two risk assessments worked the same way. As a result, “I became very interested in how organisational security processes work, what makes a risk assessment or audit process effective, and what ‘effective’ is.” She focused on these issues for her PhD at King’s College London, still very much a practitioner when she finished it in 2008. Her contemporaneous work for Lloyds Register Quality Assurance (LRQA) focused on ISO 27001 security management assessment for a wide range of organisations including one of the private hospital chains.

“Health care is fascinating because the need for clinical governance is completely enmeshed with security governance. You have to think about security from the perspective of the clinical, and information-sharing needs change as the patient’s condition changes.”

Her academic career began in 2005, when she began teaching undergraduates part-time at Royal Holloway; she moved to full-time in 2008. On arrival, she applied to participate in a “sandpit” run by the Engineering and Physical Sciences Research Council (EPSRC), the Economic and Social Research Council (ESRC), and the Technology Strategy Board. Coles-Kemp was part of a successful funding bid that emerged from this five-day immersive environment, in which researchers collaborated on developing research questions, forming new teams, and preparing proposals. Led by Coles-Kemp, Visualisation and Other Methods of Expression (VOME) studied why people share what they do online and what they view as protection. Her remit: cover under-served communities. In partnership with Ashenden, Alison Adams, the Universities of Salford and Cranfield, the consultancy Consult Hyperion, and Sunderland City Council, Coles-Kemp worked directly with hard-to-reach communities such as the long-term unemployed in socio-economically deprived areas. In that environment, traditional research tools like focus groups and surveys were little help; new methods were needed.

“We weren’t understanding what was of interest to those communities about data sharing because we were making all sorts of assumptions about what was important to them, and we had to get that out of the way to really understand data sharing in this context.”

For example, in these communities, few imagined they had much realistic chance of employment – so the risk that what they posted online might damage those prospects was meaningless. Similarly, in families who have been physically close for generations it often made more sense, for both safety and security, to share passwords. Coles-Kemp often heard, “We share a lot of other stuff.” The result was, “We got close enough to the communities to understand that it’s not that clear-cut, and we have to think about the overall safety and security of the individual within the family unit.”

Lizzie Coles-Kemp, drawn by Makayla Lewis

Their solution happened almost by accident. In VOME’s first year, ESRC offered a bursary to take part in a festival of social science. The VOME group partnered with the theatre company Bimbilibausa, led by clown Freya Stang, to present a short play about privacy choices in the workplace based on their research to date. The group took the play to Sunderland and invited the participants they had worked with to use the council’s voting paddles to select the story’s privacy outcome. Because whole families attended, the play led to intergenerational conversations about privacy and a meta-narrative that showed Coles-Kemp’s team the value of creative engagement techniques. The results encouraged Coles-Kemp to continue working with researchers and artists to develop a range of creative methods, including story sheets and Lego bricks, to create three to four provocations or open questions that then let them drill down into individual issues. This work led to the grandmother-granddaughter paper, developed the understanding behind the panopticon paper, revealed the complexity of everyday security and therefore the need for a family of information security models, and highlighted the importance of community and family interactions – such as those that dominate the narratives of The Archers – in regulating the flow of information.

Creative engagement methods have both utility for the participant communities and methodological value. A further study, funded by the Arts and Humanities Research Council (AHRC), focused on families separated by prison sentences, with the goal of understanding why they didn’t engage with the support services provided to them. In this case, the families proved to be more interested in talking about the journeys involved in prison visiting. “We went with that, figuring that if support services were important that would manifest itself,” Coles-Kemp says. The group worked with a prison in North East England to develop questions and create a large wall collage that is still in use as part of rehabilitation training when offenders are preparing to leave prison, as well as a series of story cubes that form part of visitor induction, helping families understand the kinds of issues that will confront them and introducing the support that’s available.

The creative engagement described here – story cubes, collages, drawings, Lego building – remains part of Coles-Kemp’s practice. CySeCa’s researchers, for example, included Makayla Lewis, who used her sketchnoting, HCI, and user experience expertise to create cartoons based on interviews with security practitioners. These were then used to initiate discussions that exposed the information flows among people; the results were compared with the results of network traffic analysis to find policy conflicts and gaps. In September 2016, Coles-Kemp started a five-year, EPSRC-funded fellowship programme to develop these techniques in conjunction with wider political and sociological theories of security, in order to design and evaluate alternative approaches to securing digital services. Her work in this programme focuses on essential public services including welfare, health, housing, employment, education, and criminal justice. Coles-Kemp will continue to work with academic and practitioner communities in RISCS to develop and disseminate these theoretical frameworks, practical techniques, and expertise.

The secondary questions security gap

Angela Sasse at CPDP2017

The BBC reports that a common pastime on Facebook – comparing users’ top ten concerts – may present a security risk. The reason lies in the secondary security questions many websites use as fallback measures to identify users who have forgotten their passwords. Among the standard questions websites prompt users to answer are the first gig you attended, your mother’s maiden name, your favourite movie, and the name of your first pet.

Quoted in the story, RISCS director and UCL professor Angela Sasse notes that it’s fairer to blame the sites for security breaches than individuals, arguing that using information that may be publicly available violates good security principles. Similar stories have surfaced in the past relating to other social media trends, such as posting your “porn name” – typically made up of the name of your first pet coupled with the name of the street you grew up on.

Sasse told the BBC, “The risk is not so much publishing these lists, rather that somebody thinks it is a good idea to use questions like that as security credentials.”

An ancillary problem is that many sites ask the same questions, and in the event of a data breach those answers can be used to gain access to other accounts the user holds.

At the National Cyber Security Centre blog, Kate R expands on how site owners and developers might manage these security questions so they leave less of a gap in security. First, she says, try to find alternatives. If that’s not possible, avoid questions with easily guessable answers that attackers can exploit. Dynamic questions, which depend on answers generated from data the sites already hold, may be a more secure choice than static questions if the pool of possible answers is large enough. Consider whether users can remember the answers they give, whether they are likely to use the same answers elsewhere, and how much effort the system will require of users.
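
Kate R’s point about the size of the answer pool can be made concrete with a little arithmetic: an attacker allowed k guesses succeeds with probability equal to the combined frequency of the k most popular answers. The sketch below works through this; the frequency figures are invented purely for illustration.

```python
#!/usr/bin/env python3
"""Back-of-envelope sketch of why small answer pools make weak
credentials: an attacker who tries the k most common answers succeeds
with probability equal to the sum of their frequencies. The figures
below are invented for illustration."""

# Hypothetical share of users giving each answer to "favourite movie?"
answer_freq = {
    "star wars": 0.08, "titanic": 0.06, "the godfather": 0.05,
    "frozen": 0.04, "jaws": 0.03,  # ...long tail omitted
}

def success_within(k: int) -> float:
    top = sorted(answer_freq.values(), reverse=True)[:k]
    return sum(top)

for k in (1, 3, 5):
    print(f"Attacker trying the top {k} answers succeeds "
          f"{success_within(k):.0%} of the time")
```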

Steven J. Murdoch

On the Bentham’s Gaze blog, UCL Royal Society University Research Fellow Steven J. Murdoch expands on the theme that companies should stop passing the buck to consumers. In a discussion of standard security advice that’s unfit for the real world, he offers some practical alternatives. For example, he says password re-use across sites is a bigger problem than choosing passwords that are simple enough to remember; he recommends memorising unique passwords for the few most important sites, such as banking and email, and using a password manager for the rest. Similarly, although security experts typically tell users not to write down or share their passwords, this is poor advice within the context of a family, where doing so can be important. Murdoch goes on to discuss the difficulties of giving good security advice when individuals have so little control over the quality of the security measures imposed on them by others such as banks, lenders, and mobile phone handset manufacturers.

Modeling and analysis of influence power for information security decisions

Iryna Yevseyeva, Charles Morisset and Aad van Moorsel

Abstract

Users of computing systems and devices frequently make decisions related to information security, e.g., when choosing a password or deciding whether to log into an unfamiliar wireless network. Employers or other stakeholders may have a preference for certain outcomes, without being able to, or desiring to, enforce a particular decision. In such situations, systems may build in design nudges to influence the decision making, e.g., by highlighting the employer’s preferred solution. In this paper we model the influencing of information security decisions to identify which approaches to influencing are most effective and how they can be optimized. To do so, we extend traditional multi-criteria decision analysis models with modifiable criteria, to represent the available approaches an influencer has for influencing the choice of the decision maker. The notion of influence power is introduced to characterize the extent to which an influencer can influence decision makers. We illustrate our approach using data from a controlled experiment on techniques to influence which public wireless network users select. This allows us to calculate influence power and identify which design nudges exercise the most influence over user decisions.

Date: April 2016
Published: Performance Evaluation: An International Journal, Volume 98
Publisher: Elsevier
Publisher URL: http://www.sciencedirect.com/science/article/pii/S0166531616000043
Full text: https://goo.gl/v8EOOg
DOI: http://dx.doi.org/10.1016/j.peva.2016.01.003
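
To give a flavour of the idea – and only a flavour, since this toy is not the authors’ model – the sketch below renders a nudge as a modification to one criterion in a simple weighted-sum multi-criteria choice between wireless networks. All weights, scores, and the size of the boost are invented; influence power in the paper is a formal notion, whereas here we merely observe whether the nudge flips the chosen option.

```python
import numpy as np

# Toy multi-criteria choice among three wireless networks, scored on
# (signal strength, familiarity, security). All figures are invented.
criteria_weights = np.array([0.4, 0.3, 0.3])
options = {
    "open-cafe-wifi":  np.array([0.9, 0.8, 0.1]),
    "employer-vpn":    np.array([0.6, 0.2, 0.9]),
    "unknown-hotspot": np.array([0.8, 0.2, 0.2]),
}

def best(scores: dict) -> str:
    return max(scores, key=lambda o: scores[o] @ criteria_weights)

baseline = best(options)

# "Nudge": the influencer highlights the preferred option, modelled here
# as a modifiable boost to its familiarity score.
nudged = {k: v.copy() for k, v in options.items()}
nudged["employer-vpn"][1] += 0.4  # size of boost = influence applied

print("Chosen without nudge:", baseline)      # open-cafe-wifi
print("Chosen with nudge:   ", best(nudged))  # employer-vpn
```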

Personality and Social Framing in Privacy Decision-Making: A Study on Cookie Acceptance

Lynne M. Coventry, Debora Jeske, John M. Blythe, James Turland and Pam Briggs

Abstract

Despite their best intentions, people struggle with the realities of privacy protection and will often sacrifice privacy for convenience in their online activities. Individuals show systematic, personality-dependent differences in their privacy decision making, which makes them interesting to those who seek to design ‘nudges’ intended to shift privacy behaviors. We explore such effects in a cookie decision task. Two hundred and ninety participants were given an incidental website review task that masked the true aim of the study. At the task outset, they were asked whether they wanted to accept a cookie in a message that either contained a social framing ‘nudge’ (they were told that either a majority or a minority of users like themselves had accepted the cookie) or contained no information about social norms (control). At the end of the task, participants were asked to complete a range of personality assessments (impulsivity, risk-taking, willingness to self-disclose and sociability). We found social framing to be an effective behavioral nudge, reducing cookie acceptance in the minority social norm condition. Further, we found personality effects, in that those scoring highly on risk-taking and impulsivity were significantly more likely to accept the cookie. Finally, we found that the application of a social nudge could attenuate the personality effects of impulsivity and risk-taking. We explore the implications for those working in the privacy-by-design space.

Date: 7 September 2016
Published: Frontiers in Psychology 7, Article 1341: 1-12
Publisher: Frontiers Research Foundation
Full Text: http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01341/full
DOI: http://dx.doi.org/10.3389/fpsyg.2016.01341

Two-stage Security Controls Selection

Iryna Yevseyeva, Vitor Basto Fernandes, Aad van Moorsel, Helge Janicke and Michael Emmerich

Abstract

To protect a system from potential cyber security breaches and attacks, one needs to select efficient security controls, taking into account technical and institutional goals and constraints such as available budget, enterprise activity, and the internal and external environment. Here we model the security controls selection problem as a two-stage decision-making process: first, managers and information security officers define the size of the security budget; second, the budget is distributed between various types of security controls. By viewing loss prevention with security controls measured as gains relative to a baseline (losses without applying security controls), we formulate the decision-making process as a classical portfolio selection problem. The model treats security budget allocation as a two-objective problem, balancing risk and return, given a budget constraint. The Sharpe ratio is used to identify an optimal point on the Pareto front at which to spend the budget. At the management level, the budget size is chosen by computing the trade-offs between Sharpe ratios and budget sizes. The proposed two-stage decision-making model can be solved by quadratic programming techniques, which is demonstrated for a test case scenario with realistic data.

Date: 2016
Published: Procedia Computer Science, Volume 100, 2016, pages 971-97
Publisher: Elsevier
Publisher URL: http://www.sciencedirect.com/science/article/pii/S1877050916324309
Full text: https://goo.gl/hJKTkS
DOI: https://doi.org/10.1016/j.procs.2016.09.261
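
As a rough illustration of the second stage – distributing a fixed budget across control types by balancing risk and return – the sketch below scores candidate allocations by their Sharpe ratio. The gain and covariance figures are invented, and random sampling is a crude stand-in for the quadratic programming the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented figures: expected loss prevented ("return") per unit spend on
# three control types, and the covariance of those gains.
mean_gain = np.array([0.08, 0.05, 0.12])
cov_gain = np.array([[0.010, 0.002, 0.004],
                     [0.002, 0.006, 0.001],
                     [0.004, 0.001, 0.020]])

def sharpe(w: np.ndarray) -> float:
    return (w @ mean_gain) / np.sqrt(w @ cov_gain @ w)

# Sample random allocations of the budget (weights summing to 1) and
# keep the one with the best risk-adjusted return.
best_w, best_s = None, -np.inf
for _ in range(10_000):
    w = rng.dirichlet(np.ones(3))
    s = sharpe(w)
    if s > best_s:
        best_w, best_s = w, s

print("Best allocation:", np.round(best_w, 2), "Sharpe ratio:", round(best_s, 2))
```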


Exploring the relationship between impulsivity and decision-making on mobile devices

Debora Jeske, Pam Briggs and Lynne Coventry

Abstract

Mobile devices offer a common platform for both leisure and work-related tasks, but this has resulted in a blurred boundary between home and work. In this paper, we explore the security implications of this blurred boundary, both for the worker and the employer. Mobile workers may not always make optimal security-related choices when “on the go” and more impulsive individuals may be particularly affected as they are considered more vulnerable to distraction. In this study, we used a task scenario, in which 104 users were asked to choose a wireless network when responding to work demands while out of the office. Eye-tracking data was obtained from a subsample of 40 of these participants in order to explore the effects of impulsivity on attention. Our results suggest that impulsive people are more frequent users of public devices and networks in their day-to-day interactions and are more likely to access their social networks on a regular basis. However, they are also likely to make risky decisions when working on-the-go, processing fewer features before making those decisions. These results suggest that those with high impulsivity may make more use of the mobile Internet options for both work and private purposes, but they also show attentional behavior patterns that suggest they make less considered security-sensitive decisions. The findings are discussed in terms of designs that might support enhanced deliberation, both in the moment and also in relation to longer term behaviors that would contribute to a better work–life balance.

Date: August 2016
Published: Personal and Ubiquitous Computing, Volume 20 (Issue 4), pp. 545-557
Publisher: Springer
Publisher URL: https://link.springer.com/article/10.1007%2Fs00779-016-0938-4
Full Text: https://goo.gl/Kx0YXm
DOI: http://dx.doi.org/10.1007/s00779-016-0938-4

Combining Qualitative Coding and Sentiment Analysis: Deconstructing Perceptions of Usable Security in Organisations

Ingolf Becker, Simon Parkin and M. Angela Sasse

Abstract

Background: A person’s security behavior is driven by underlying mental constructs, perceptions and beliefs. Examination of security behavior is often based on dialogue with users of security, which is analysed in textual form by qualitative research methods such as Qualitative Coding (QC). Yet QC has drawbacks: security issues are often time-sensitive, but QC is extremely time-consuming; and QC is often carried out by a single researcher, raising questions about the validity and repeatability of the results. Previous research has identified frequent tensions between security and other tasks, which can evoke emotional responses. Sentiment Analysis (SA) is simpler to execute and has been shown to deliver accurate and repeatable results. / Aim: By combining QC with SA we aim to focus the analysis on areas of strongly represented sentiment. Additionally, we can analyse the variations in sentiment across populations for each of the QC codes, allowing us to identify beneficial and harmful security practices. / Method: We code QC-annotated transcripts independently for sentiment. The distribution of sentiment for each QC code is statistically tested against the distribution of sentiment of all other QC codes. Similarly, we also test the sentiment of each QC code across population subsets. We compare our findings with the results from the original QC analysis. Here we analyse 21 QC-treated interviews with 9 security specialists, 9 developers and 3 usability experts at 3 large organisations claiming to develop ‘usable security products’. This combines 4983 manually annotated instances of sentiment with 3737 quotations over 76 QC codes. / Results: The methodology identified 83 statistically significant variations (with p < 0.05). The original qualitative analysis implied that organisations considered usability only when not doing so impacted revenue; our approach finds that developers appreciate usability tools to aid the development process, but that conflicts arise due to the disconnect between customers and developers. We find organisational cultures which put security first, creating an artificial trade-off for developers between security and usability. / Conclusions: Our methodology confirmed many of the QC findings, but gave more nuanced insights. The analysis across different organisations and employees confirmed the repeatability of our approach, and provided evidence of variations that were lost in the QC findings alone. The methodology adds objectivity to QC in the form of reliable SA, but does not remove the need for interpretation. Instead, it shifts interpretation from large QC data to condensed statistical tables, which makes the results more accessible to a wider audience not necessarily versed in QC and SA.

Date: 26 May 2016
Published: The LASER Workshop: Learning from Authoritative Security Experiment Results
Publisher: IEEE
Publisher URL: http://2016.laser-workshop.org/
Full Text: https://www.usenix.org/system/files/conference/laser2016/laser2016-paper-becker.pdf
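
A minimal sketch of the statistical step described above might look like the following: for each QC code, compare the sentiment scores of its quotations against the pooled sentiment of every other code. The data is invented, and the choice of the Mann-Whitney U test is an assumption – the paper’s own test may differ.

```python
from scipy.stats import mannwhitneyu

# Toy sentiment scores (-2..+2) attached to quotations, grouped by QC code.
by_code = {
    "usability-tools":  [1, 2, 1, 0, 2, 1],
    "security-first":   [-1, -2, 0, -1, -2, -1],
    "customer-contact": [0, 1, -1, 0, 1, 0],
}

# Test each code's sentiment distribution against the pooled sentiment
# of all other codes.
for code, scores in by_code.items():
    rest = [s for c, ss in by_code.items() if c != code for s in ss]
    stat, p = mannwhitneyu(scores, rest, alternative="two-sided")
    flag = "significant" if p < 0.05 else "not significant"
    print(f"{code}: U={stat:.1f}, p={p:.3f} ({flag})")
```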

Towards robust experimental design for user studies in security and privacy

Kat Krol, Jonathan M. Spring, Simon Parkin and M. Angela Sasse

Abstract

Background: Human beings are an integral part of computer security, whether we actively participate or simply build the systems. Despite this importance, understanding users and their interaction with security is a blind spot for most security practitioners and designers. / Aim: Define principles for conducting experiments into usable security and privacy, to improve study robustness and usefulness. / Data: The authors’ experiences conducting several research projects, complemented with a literature survey. / Method: We extract principles based on relevance to the advancement of the state of the art. We then justify our choices by providing published experiments as cases of where the principles are and are not followed in practice, to demonstrate the impact. Each principle is a discipline-specific instantiation of desirable experiment-design elements as previously established in the domain of philosophy of science. / Results: Five high-priority principles – (i) give participants a primary task; (ii) incorporate realistic risk; (iii) avoid priming the participants; (iv) perform double-blind experiments whenever possible; and (v) think carefully about how meaning is assigned to the terms threat model, security, privacy, and usability. / Conclusion: The principles do not replace researcher acumen or experience; however, they can provide a valuable service by facilitating evaluation, guiding younger researchers and students, and marking a baseline common language for discussing further improvements.

Date: 26 May 2016
Published: The LASER Workshop: Learning from Authoritative Security Experiment Results
Publisher: IEEE
Publisher URL: http://2016.laser-workshop.org/
Full Text: https://www.usenix.org/system/files/conference/laser2016/laser2016-paper-krol.pdf

“I don’t like putting my face on the Internet!”: An acceptance study of face biometrics as a CAPTCHA replacement

Kat Krol, Simon Parkin and M. Angela Sasse

Abstract

Biometric technologies have the potential to reduce the effort involved in securing personal activities online, such as purchasing goods and services. Verifying that a user session on a website is attributable to a real human is one candidate application, especially as the existing CAPTCHA technology is burdensome and can frustrate users. Here we examine the viability of biometrics as part of the consumer experience in this space. We invited 87 participants to take part in a lab study, using a realistic ticket-buying website with a range of human verification mechanisms including a face biometric technology. User perceptions and acceptance of the various security technologies were explored through interviews and a range of questionnaires within the study. The results show that some users wanted reassurance that their personal image would be protected or discarded after verification, whereas others felt that if they saw enough people using face biometrics they would be assured that it was trustworthy. Face biometrics were seen by some participants to be more suitable for high-security contexts, and by others as providing extra personal data that had unacceptable privacy implications.

Date: 26 May 2016
Published: 2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)
Publisher: IEEE
Publisher URL: http://ieeexplore.ieee.org/abstract/document/7477235/
Full Text: http://discovery.ucl.ac.uk/1475655/1/ISBA2016.pdf
DOI: http://dx.doi.org/10.1109/ISBA.2016.7477235