Publications

Modeling and analysis of influence power for information security decisions

Iryna Yevseyeva, Charles Morisset and Aad van Moorsel

Abstract

Users of computing systems and devices frequently make decisions related to information security, e.g., when choosing a password or deciding whether to log into an unfamiliar wireless network. Employers or other stakeholders may have a preference for certain outcomes, without being able, or willing, to enforce a particular decision. In such situations, systems may be designed with nudges to influence the decision making, e.g., by highlighting the employer’s preferred solution. In this paper we model the influencing of information security decisions to identify which approaches to influencing are most effective and how they can be optimized. To do so, we extend traditional multi-criteria decision analysis models with modifiable criteria, to represent the approaches an influencer has available for influencing the choice of the decision maker. The notion of influence power is introduced to characterize the extent to which an influencer can influence decision makers. We illustrate our approach using data from a controlled experiment on techniques to influence which public wireless network users select. This allows us to calculate influence power and identify which design nudges exercise the most influence over user decisions.
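
The core mechanism can be illustrated with a short, self-contained Python sketch. This is our illustration rather than the authors' implementation: the weighted-sum utility, the softmax choice rule and all numbers are assumptions made for the example, and influence power is shown simply as the shift in the probability that the influencer's preferred option is chosen.

    import numpy as np

    def choice_probabilities(scores, weights, temperature=1.0):
        """Weighted-sum utility per option, turned into choice
        probabilities with a softmax rule (our assumption)."""
        utilities = scores @ weights
        z = np.exp(utilities / temperature)
        return z / z.sum()

    # Rows: candidate wireless networks; columns: criteria
    # (e.g. signal strength, familiarity, highlighted-by-employer).
    base = np.array([[0.9, 0.2, 0.0],
                     [0.5, 0.8, 0.0],
                     [0.6, 0.5, 0.0]])
    weights = np.array([0.5, 0.3, 0.2])

    # The influencer acts on a modifiable criterion: highlight option 2.
    nudged = base.copy()
    nudged[2, 2] = 1.0

    p_before = choice_probabilities(base, weights)
    p_after = choice_probabilities(nudged, weights)

    # Influence power, illustrated as the gain in the probability
    # that the influencer's preferred option (index 2) is chosen.
    print("influence power:", p_after[2] - p_before[2])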

Date: April 2016
Published: Performance Evaluation: An International Journal, Volume 98
Publisher: Elsevier
Publisher URL: http://www.sciencedirect.com/science/article/pii/S0166531616000043
Full text: https://goo.gl/v8EOOg
DOI: http://dx.doi.org/10.1016/j.peva.2016.01.003

Personality and Social Framing in Privacy Decision-Making: A Study on Cookie Acceptance

Lynne M. Coventry, Debora Jeske, John M. Blythe, James Turland and Pam Briggs

Abstract

Despite their best intentions, people struggle with the realities of privacy protection and will often sacrifice privacy for convenience in their online activities. Individuals show systematic, personality-dependent differences in their privacy decision making, which makes these differences interesting to those who seek to design ‘nudges’ intended to manipulate privacy behaviors. We explore such effects in a cookie decision task. Two hundred and ninety participants were given an incidental website review task that masked the true aim of the study. At the task outset, they were asked whether they wanted to accept a cookie in a message that either contained a social framing ‘nudge’ (they were told that either a majority or a minority of users like themselves had accepted the cookie) or contained no information about social norms (control). At the end of the task, participants were asked to complete a range of personality assessments (impulsivity, risk-taking, willingness to self-disclose and sociability). We found social framing to be an effective behavioral nudge, reducing cookie acceptance in the minority social norm condition. Further, we found personality effects, in that those scoring highly on risk-taking and impulsivity were significantly more likely to accept the cookie. Finally, we found that the application of a social nudge could attenuate the personality effects of impulsivity and risk-taking. We explore the implications for those working in the privacy-by-design space.

Date: 7 September 2016
Published: Frontiers in Psychology, Volume 7, Article 1341, pp. 1-12
Publisher: Frontiers Research Foundation
Full Text: http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01341/full
DOI: http://dx.doi.org/10.3389/fpsyg.2016.01341

Two-stage Security Controls Selection

Iryna Yevseyeva, Vitor Basto Fernandes, Aad van Moorsel, Helge Janicke and Michael Emmerich

Abstract

To protect a system from potential cyber security breaches and attacks, one needs to select efficient security controls, taking into account technical and institutional goals and constraints, such as available budget, enterprise activity, and the internal and external environment. Here we model the security controls selection problem as two-stage decision making: first, managers and information security officers define the size of the security budget; second, the budget is distributed between various types of security controls. By viewing loss prevention with security controls as gains relative to a baseline (losses without applying security controls), we formulate the decision-making process as a classical portfolio selection problem. The model treats security budget allocation as a two-objective problem, balancing risk and return given a budget constraint. The Sharpe ratio is used to identify an optimal point on the Pareto front at which to spend the budget. At the management level, the budget size is chosen by computing the trade-offs between Sharpe ratios and budget sizes. The proposed two-stage decision-making model can be solved by quadratic programming techniques, which we demonstrate on a test-case scenario with realistic data.
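
The second-stage allocation can be sketched as a standard mean-variance problem in Python. This sketch is ours, not the paper's code: the gains vector, covariance matrix and risk-aversion parameter are invented illustrative data, and the Sharpe ratio is computed against a zero baseline.

    import numpy as np
    from scipy.optimize import minimize

    # Illustrative data: expected loss-prevention gains of four
    # security controls and the covariance of those gains.
    gains = np.array([0.08, 0.12, 0.10, 0.07])
    cov = np.diag([0.02, 0.06, 0.04, 0.01])
    risk_aversion = 3.0

    # Stage 2: distribute a fixed budget over the controls by
    # maximising return minus a risk penalty (classical
    # mean-variance formulation, solvable as a QP).
    def neg_objective(w):
        return -(gains @ w - risk_aversion * w @ cov @ w)

    n = len(gains)
    res = minimize(neg_objective, np.full(n, 1.0 / n),
                   bounds=[(0, 1)] * n,
                   constraints=[{"type": "eq",
                                 "fun": lambda w: w.sum() - 1.0}])
    w = res.x

    # Sharpe ratio of the allocation, used in stage 1 to compare
    # candidate budget sizes (baseline return assumed zero here).
    sharpe = gains @ w / np.sqrt(w @ cov @ w)
    print("allocation:", w.round(3), "Sharpe ratio:", round(sharpe, 3))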

Date: 2016
Published: Procedia Computer Science, Volume 100, 2016, pages 971-978
Publisher: Elsevier
Publisher URL: http://www.sciencedirect.com/science/article/pii/S1877050916324309
Full text: https://goo.gl/hJKTkS
DOI: https://doi.org/10.1016/j.procs.2016.09.261

 

Exploring the relationship between impulsivity and decision-making on mobile devices

Debora Jeske, Pam Briggs and Lynne Coventry

Abstract

Mobile devices offer a common platform for both leisure and work-related tasks, but this has resulted in a blurred boundary between home and work. In this paper, we explore the security implications of this blurred boundary, both for the worker and the employer. Mobile workers may not always make optimal security-related choices when “on the go” and more impulsive individuals may be particularly affected as they are considered more vulnerable to distraction. In this study, we used a task scenario, in which 104 users were asked to choose a wireless network when responding to work demands while out of the office. Eye-tracking data was obtained from a subsample of 40 of these participants in order to explore the effects of impulsivity on attention. Our results suggest that impulsive people are more frequent users of public devices and networks in their day-to-day interactions and are more likely to access their social networks on a regular basis. However, they are also likely to make risky decisions when working on-the-go, processing fewer features before making those decisions. These results suggest that those with high impulsivity may make more use of the mobile Internet options for both work and private purposes, but they also show attentional behavior patterns that suggest they make less considered security-sensitive decisions. The findings are discussed in terms of designs that might support enhanced deliberation, both in the moment and also in relation to longer term behaviors that would contribute to a better work–life balance.

Date: August 2016
Published: Personal and Ubiquitous Computing, Volume 20, Issue 4, pp. 545-557
Publisher: Springer
Publisher URL: https://link.springer.com/article/10.1007%2Fs00779-016-0938-4
Full Text: https://goo.gl/Kx0YXm
DOI: http://dx.doi.org/10.1007/s00779-016-0938-4

Combining Qualitative Coding and Sentiment Analysis: Deconstructing Perceptions of Usable Security in Organisations

Ingolf Becker, Simon Parkin and M. Angela Sasse

Abstract

Background: A person’s security behavior is driven by underlying mental constructs, perceptions and beliefs. Examination of security behavior is often based on dialogue with users of security, which is analysed in textual form by qualitative research methods such as Qualitative Coding (QC). Yet QC has drawbacks: security issues are often time-sensitive, but QC is extremely time-consuming; QC is also often carried out by a single researcher, raising questions about the validity and repeatability of the results. Previous research has identified frequent tensions between security and other tasks, which can evoke emotional responses. Sentiment Analysis (SA) is simpler to execute and has been shown to deliver accurate and repeatable results.

Aim: By combining QC with SA we aim to focus the analysis on areas of strongly represented sentiment. Additionally, we can analyse the variations in sentiment across populations for each of the QC codes, allowing us to identify beneficial and harmful security practices.

Method: We code QC-annotated transcripts independently for sentiment. The distribution of sentiment for each QC code is statistically tested against the distribution of sentiment of all other QC codes. Similarly, we also test the sentiment of each QC code across population subsets. We compare our findings with the results from the original QC analysis. Here we analyse 21 QC-treated interviews with 9 security specialists, 9 developers and 3 usability experts at 3 large organisations claiming to develop ‘usable security products’. This combines 4983 manually annotated instances of sentiment with 3737 quotations over 76 QC codes.

Results: The methodology identified 83 statistically significant variations (with p < 0.05). The original qualitative analysis implied that organisations considered usability only when not doing so impacted revenue; our approach finds that developers appreciate usability tools to aid the development process, but that conflicts arise due to the disconnect between customers and developers. We find organisational cultures which put security first, creating an artificial trade-off for developers between security and usability.

Conclusions: Our methodology confirmed many of the QC findings, but gave more nuanced insights. The analysis across different organisations and employees confirmed the repeatability of our approach, and provided evidence of variations that were lost in the QC findings alone. The methodology adds objectivity to QC in the form of reliable SA, but does not remove the need for interpretation. Instead it shifts interpretation from large QC data to condensed statistical tables, which makes it more accessible to a wider audience not necessarily versed in QC and SA.
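
The per-code statistical step can be sketched in Python as below, assuming each annotated quotation carries a numeric sentiment score. This is our illustration: the Mann-Whitney U test is one plausible two-sample test for comparing sentiment distributions, not necessarily the test used in the paper, and the annotations are invented.

    from collections import defaultdict
    from scipy.stats import mannwhitneyu

    # Illustrative data: (qc_code, sentiment score) per annotated
    # quotation; real input would come from coded transcripts.
    annotations = [("usability", -1), ("usability", -2),
                   ("usability", -1), ("compliance", 1),
                   ("compliance", 2), ("compliance", 0),
                   ("training", 0), ("training", 1)]

    by_code = defaultdict(list)
    for code, score in annotations:
        by_code[code].append(score)

    # Test each QC code's sentiment distribution against the
    # pooled sentiment of all other codes.
    for code, scores in by_code.items():
        rest = [s for c, ss in by_code.items() if c != code for s in ss]
        stat, p = mannwhitneyu(scores, rest, alternative="two-sided")
        flag = "significant" if p < 0.05 else "n.s."
        print(f"{code}: U={stat:.1f}, p={p:.3f} ({flag})")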

Date: 26 May 2016
Published: The LASER Workshop: Learning from Authoritative Security Experiment Results
Publisher: IEEE
Publisher URL: http://2016.laser-workshop.org/
Full Text: https://www.usenix.org/system/files/conference/laser2016/laser2016-paper-becker.pdf

Towards robust experimental design for user studies in security and privacy

Kat Krol, Jonathan M. Spring, Simon Parkin and M. Angela Sasse

Abstract

Background: Human beings are an integral part of computer security, whether we actively participate or simply build the systems. Despite this importance, understanding users and their interaction with security is a blind spot for most security practitioners and designers.

Aim: Define principles for conducting experiments into usable security and privacy, to improve study robustness and usefulness.

Data: The authors’ experiences conducting several research projects, complemented with a literature survey.

Method: We extract principles based on relevance to the advancement of the state of the art. We then justify our choices by providing published experiments as cases of where the principles are and are not followed in practice, to demonstrate the impact. Each principle is a discipline-specific instantiation of desirable experiment-design elements as previously established in the domain of philosophy of science.

Results: Five high-priority principles: (i) give participants a primary task; (ii) incorporate realistic risk; (iii) avoid priming the participants; (iv) perform double-blind experiments whenever possible; and (v) think carefully about how meaning is assigned to the terms threat model, security, privacy, and usability.

Conclusion: The principles do not replace researcher acumen or experience; however, they can provide a valuable service for facilitating evaluation, guiding younger researchers and students, and marking a baseline common language for discussing further improvements.

Date: 26 May 2016
Published: The LASER Workshop: Learning from Authoritative Security Experiment Results
Publisher: IEEE
Publisher URL: http://2016.laser-workshop.org/
Full Text: https://www.usenix.org/system/files/conference/laser2016/laser2016-paper-krol.pdf                                                       

“I don’t like putting my face on the Internet!”: An acceptance study of face biometrics as a CAPTCHA replacement

Kat Krol, Simon Parkin and M. Angela Sasse

Abstract

Biometric technologies have the potential to reduce the effort involved in securing personal activities online, such as purchasing goods and services. Verifying that a user session on a website is attributable to a real human is one candidate application, especially as the existing CAPTCHA technology is burdensome and can frustrate users. Here we examine the viability of biometrics as part of the consumer experience in this space. We invited 87 participants to take part in a lab study, using a realistic ticket-buying website with a range of human verification mechanisms including a face biometric technology. User perceptions and acceptance of the various security technologies were explored through interviews and a range of questionnaires within the study. The results show that some users wanted reassurance that their personal image will be protected or discarded after verifying, whereas others felt that if they saw enough people using face biometrics they would feel assured that it was trustworthy. Face biometrics were seen by some participants to be more suitable for high-security contexts, and by others as providing extra personal data that had unacceptable privacy implications.

Date: 26 May 2016
Published: 2016 IEEE International Conference on Identity, Security and Behavior Analysis (ISBA)
Publisher: IEEE
Publisher URL: http://ieeexplore.ieee.org/abstract/document/7477235/
Full Text: http://discovery.ucl.ac.uk/1475655/1/ISBA2016.pdf
DOI: http://dx.doi.org/10.1109/ISBA.2016.7477235

Applying Cognitive Control Modes to Identify Security Fatigue Hotspots

Simon Parkin, Kat Krol, Ingolf Becker and M. Angela Sasse

Abstract

Security tasks can burden the individual, to the extent that security fatigue promotes bad security habits. Here we revisit a series of user-centred studies of security mechanisms used as part of regular routines, such as two-factor authentication. These studies inform reflection upon the perceived contributors and consequences of fatigue, and the strategies that a person may adopt in response to feeling overburdened by security. The fatigue produced by security tasks is then framed using a model of cognitive control modes, which explores human performance and error. Security tasks are then considered in terms of modes such as unconscious routines and knowledge-based ad-hoc approaches. Conscious attention can support adaptation to novel security situations, but is error-prone and tiring; both simple security routines and technology-driven automation can minimise effort, but may miss cues from the environment that a nuanced response is required.

Date: 22 June 2016
Published: SOUPS Workshop on Security Fatigue
Publisher: USENIX
Publisher URL: https://www.usenix.org/conference/soups2016/workshop-program/wsf/presentation/parkin
Full Text: https://www.usenix.org/system/files/conference/soups2016/wsf16_paperparkin.pdf

Productive Security: A Scalable Methodology for Analysing Employee Security Behaviour

Adam Beautement, Ingolf Becker, Simon Parkin, Kat Krol and M. Angela Sasse

Abstract

Organisational security policies are often written without sufficiently taking into account the goals and capabilities of the employees that must follow them. Effective security management requires that security managers are able to assess the effectiveness of their policies, including their impact on employee behaviour. We present a methodology for gathering large-scale data sets on employee behaviour and attitudes via scenario-based surveys. The survey questions are grounded in rich data drawn from interviews, and probe perceptions of security measures and their impact. Here we study employees of a large multinational company, demonstrating that our approach is capable of determining important differences between various population groups. We also report that our work has been used to set policy within the partner organisation, illustrating the real-world impact of our research.

Date: 22 June 2016
Published: Proceedings of the Twelfth Symposium on Usable Privacy and Security (SOUPS 2016)
Publisher: USENIX
Publisher URL: https://www.usenix.org/system/files/conference/soups2016/soups2016-paper-beautement.pdf

Social Learning in Systems Security Modelling

Tristan Caulfield, Michelle Catherine Baddeley and David Pym

Abstract

Systems modelling can be used to help improve decisions around security policy. By modelling a complex system, the interactions between its structure, environment, technology, policies, and human agents can be understood, and the effects of different policy choices on the system can be explored. Of key importance is capturing the behaviour of human agents within the system. In this paper we present a model of social learning from behavioural economics and then integrate it into a mathematical systems modelling framework. We demonstrate this with an example: employees deciding whether or not to challenge people without ID badges in the office.
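
A standard social-learning rule from behavioural economics can be sketched in Python as below. This is our illustration of the general mechanism rather than the paper's model: each agent blends a private signal with the observed behaviour of recent peers, and the social weight and decision threshold are invented parameters rather than the paper's calibration.

    import random

    def decide(private_signal, peer_choices, social_weight=0.4,
               threshold=0.5):
        """Challenge (True) or not: blend the agent's private
        inclination with the fraction of peers seen challenging."""
        peer_rate = (sum(peer_choices) / len(peer_choices)
                     if peer_choices else 0.5)
        propensity = ((1 - social_weight) * private_signal
                      + social_weight * peer_rate)
        return propensity > threshold

    random.seed(1)
    history = []  # observed choices: did earlier employees challenge?
    for step in range(20):
        signal = random.random()       # private inclination to challenge
        choice = decide(signal, history[-5:])  # observe last 5 peers
        history.append(choice)

    print("challenge rate:", sum(history) / len(history))
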
Date: 23 September 2016
Published: Social Simulation Conference, September 2016
Publisher: ResearchGate
Publisher URL: https://www.researchgate.net/publication/308632330_Social_Learning_in_Systems_Security_Modelling