Background: A person’s security behavior is driven by underlying mental constructs, perceptions and beliefs. Examination of security behavior is often based on dialogue with users of security, which is analysed in textual form by qualitative research methods such as Qualitative Coding (QC). Yet QC has drawbacks: security issues are often time-sensitive, but QC is extremely time-consuming. QC is also often carried out by a single researcher, raising questions about the validity and repeatability of the results. Previous research has identified frequent tensions between security and other tasks, which can evoke emotional responses. Sentiment Analysis (SA) is simpler to execute and has been shown to deliver accurate and repeatable results. / Aim: By combining QC with SA we aim to focus the analysis on areas of strongly represented sentiment. Additionally, we can analyse the variations in sentiment across populations for each of the QC codes, allowing us to identify beneficial and harmful security practices. / Method: We code QC-annotated transcripts independently for sentiment. The distribution of sentiment for each QC code is statistically tested against the distribution of sentiment of all other QC codes. Similarly, we test the sentiment of each QC code across population subsets. We compare our findings with the results from the original QC analysis. Here we analyse 21 QC-treated interviews with 9 security specialists, 9 developers and 3 usability experts at 3 large organisations claiming to develop ‘usable security products’. This combines 4983 manually annotated instances of sentiment with 3737 quotations over 76 QC codes. / Results: The methodology identified 83 statistically significant variations (with p < 0.05).
The original qualitative analysis implied that organisations considered usability only when not doing so impacted revenue; our approach finds that developers appreciate usability tools that aid the development process, but that conflicts arise from the disconnect between customers and developers. We find organisational cultures which put security first, creating an artificial trade-off for developers between security and usability. / Conclusions: Our methodology confirmed many of the QC findings, but gave more nuanced insights. The analysis across different organisations and employees confirmed the repeatability of our approach, and provided evidence of variations that were lost in the QC findings alone. The methodology adds objectivity to QC in the form of reliable SA, but does not remove the need for interpretation. Instead, it shifts interpretation from large QC data sets to condensed statistical tables, which makes the findings accessible to a wider audience not necessarily versed in QC and SA.
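A minimal sketch of the per-code significance test described in the Method: the sentiment distribution for one QC code is compared against the pooled distribution of all other codes using a chi-square test of homogeneity. The code name and all counts below are hypothetical illustrations, not figures from the study.

```python
# Hypothetical sentiment counts (negative, neutral, positive) for one QC code
# versus all remaining codes; the numbers are illustrative, not study data.
code_counts = [34, 10, 6]        # e.g. a hypothetical "security-usability conflict" code
rest_counts = [1200, 1500, 1300]  # pooled counts for all other QC codes

def chi_square(observed_a, observed_b):
    """Chi-square statistic for homogeneity of two sentiment distributions."""
    total_a, total_b = sum(observed_a), sum(observed_b)
    grand = total_a + total_b
    stat = 0.0
    for oa, ob in zip(observed_a, observed_b):
        col = oa + ob
        ea = col * total_a / grand   # expected counts if both groups share
        eb = col * total_b / grand   # one underlying sentiment distribution
        stat += (oa - ea) ** 2 / ea + (ob - eb) ** 2 / eb
    return stat

stat = chi_square(code_counts, rest_counts)
CRITICAL_05_DF2 = 5.991  # chi-square critical value for df = 2, p = 0.05
print(stat > CRITICAL_05_DF2)  # True -> this code's sentiment differs significantly
```

With three sentiment categories the test has two degrees of freedom, so a statistic above 5.991 corresponds to p < 0.05, the threshold used for the 83 reported variations.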
Background: Human beings are an integral part of computer security, whether we actively participate or simply build the systems. Despite this importance, understanding users and their interaction with security is a blind spot for most security practitioners and designers. / Aim: Define principles for conducting experiments into usable security and privacy, to improve study robustness and usefulness. / Data: The authors’ experiences conducting several research projects, complemented with a literature survey. / Method: We extract principles based on relevance to the advancement of the state of the art. We then justify our choices by providing published experiments as cases of where the principles are and are not followed in practice, to demonstrate the impact. Each principle is a discipline-specific instantiation of desirable experiment-design elements as previously established in the domain of philosophy of science. / Results: Five high-priority principles: (i) give participants a primary task; (ii) incorporate realistic risk; (iii) avoid priming the participants; (iv) perform double-blind experiments whenever possible; and (v) think carefully about how meaning is assigned to the terms threat model, security, privacy, and usability. / Conclusion: The principles do not replace researcher acumen or experience; however, they can provide a valuable service by facilitating evaluation, guiding younger researchers and students, and marking a baseline common language for discussing further improvements.
Biometric technologies have the potential to reduce the effort involved in securing personal activities online, such as purchasing goods and services. Verifying that a user session on a website is attributable to a real human is one candidate application, especially as the existing CAPTCHA technology is burdensome and can frustrate users. Here we examine the viability of biometrics as part of the consumer experience in this space. We invited 87 participants to take part in a lab study, using a realistic ticket-buying website with a range of human verification mechanisms including a face biometric technology. User perceptions and acceptance of the various security technologies were explored through interviews and a range of questionnaires within the study. The results show that some users wanted reassurance that their personal image will be protected or discarded after verifying, whereas others felt that if they saw enough people using face biometrics they would feel assured that it was trustworthy. Face biometrics were seen by some participants to be more suitable for high-security contexts, and by others as providing extra personal data that had unacceptable privacy implications.
Security tasks can burden the individual, to the extent that security fatigue promotes bad security habits. Here we revisit a series of user-centred studies of security mechanisms used as part of regular routines, such as two-factor authentication. These studies inform reflection upon the perceived contributors to and consequences of fatigue, and the strategies a person may adopt in response to feeling overburdened by security. The fatigue produced by security tasks is then framed using a model of cognitive control modes, which explores human performance and error. Security tasks are then considered in terms of modes such as unconscious routines and knowledge-based ad-hoc approaches. Conscious attention can support adaptation to novel security situations, but is error-prone and tiring; both simple security routines and technology-driven automation can minimise effort, but may miss cues from the environment that a nuanced response is required.
Organisational security policies are often written without sufficiently taking into account the goals and capabilities of the employees that must follow them. Effective security management requires that security managers are able to assess the effectiveness of their policies, including their impact on employee behaviour. We present a methodology for gathering large-scale data sets on employee behaviour and attitudes via scenario-based surveys. The survey questions are grounded in rich data drawn from interviews, and probe perceptions of security measures and their impact. Here we study employees of a large multinational company, demonstrating that our approach is capable of determining important differences between various population groups. We also report that our work has been used to set policy within the partner organisation, illustrating the real-world impact of our research.
Systems modelling can be used to help improve decisions around security policy. By modelling a complex system, the interactions between its structure, environment, technology, policies, and human agents can be understood, and the effects of different policy choices on the system can be explored. Of key importance is capturing the behaviour of human agents within the system. In this paper we present a model of social learning from behavioural economics and integrate it into a mathematical systems modelling framework. We demonstrate this with an example: employees deciding whether or not to challenge people without ID badges in the office. Published: Social Simulation Conference, 23 September 2016. Available at: https://www.researchgate.net/publication/308632330_Social_Learning_in_Systems_Security_Modelling
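The social-learning dynamic in the ID-badge example could be sketched as a toy agent-based loop: each employee holds a propensity to challenge an unbadged person and, after observing peers, shifts that propensity toward the observed rate of challenging. The conformity update rule and every parameter here are illustrative assumptions, not the paper's calibrated model.

```python
import random

random.seed(1)  # fixed seed so the toy run is reproducible

# Illustrative parameters (assumptions, not from the paper).
N_EMPLOYEES = 50
LEARNING_RATE = 0.2   # how strongly an agent conforms to observed peers
ROUNDS = 30

# Each agent starts with a random propensity to challenge an unbadged person.
propensity = [random.random() for _ in range(N_EMPLOYEES)]

for _ in range(ROUNDS):
    acted = [random.random() < p for p in propensity]   # who challenged this round
    observed_rate = sum(acted) / N_EMPLOYEES            # behaviour visible to peers
    # Social learning: move each propensity toward the observed peer behaviour.
    propensity = [p + LEARNING_RATE * (observed_rate - p) for p in propensity]

print(round(sum(propensity) / N_EMPLOYEES, 2))  # population average after learning
```

Because each update is a convex combination of an agent's own propensity and the peer rate, individual differences decay and the population converges on a shared norm, the qualitative effect a systems model would explore under different policy choices.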
The presence of unpatched, exploitable vulnerabilities in software is a prerequisite for many forms of cyberattack. Because vulnerabilities are almost inevitably discovered, and exploits created, for all types of software, multiple layers of security are usually used to protect vital systems from compromise. Accordingly, attackers seeking to access protected systems must circumvent all of these layers. Resource- and budget-constrained defenders must choose when to execute actions such as patching, monitoring and cleaning infected systems in order to best protect their networks. Similarly, attackers must decide when to attempt to penetrate a system and which exploit to use when doing so. We present an approach to modelling computer networks and vulnerabilities that can be used to find the optimal allocation of time to different system defence tasks. The vulnerabilities, the state of the system and the actions of the attacker and defender are used to build partially observable stochastic games. These games capture the uncertainty about the current state of the system and about the future. The solution to such a game is a policy, which indicates the optimal actions to take for a given belief about the current state of the system. We demonstrate this approach using several different network configurations and player types. We consider a trade-off for the system administrator, who must allocate their time to performing either security-related tasks or other required non-security tasks. The results highlight that, with the requirement for other tasks to be performed, following the optimal policy means spending time on only the most essential security-related tasks, while the majority of time is spent on non-security tasks.
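The administrator's time trade-off can be illustrated with a deliberately simple one-shot utility model rather than the full partially observable stochastic game: a fraction t of time goes to security tasks, which reduces breach probability with diminishing returns, while the remaining time produces other value. The exponential risk curve and every constant are assumptions chosen for illustration only.

```python
import math

def utility(t, attack_rate=0.3, breach_cost=10.0):
    """Net value of spending fraction t of time on security (toy model)."""
    p_breach = attack_rate * math.exp(-5.0 * t)  # security effort cuts risk,
                                                 # with diminishing returns
    productivity = 1.0 - t                       # non-security work forgoes t
    return productivity - breach_cost * p_breach

# Grid search over time splits to find the best allocation.
best_t = max((i / 100 for i in range(101)), key=utility)
print(best_t)
```

Even in this toy, the optimum is an interior split: some security effort is essential because breaches are costly, but past that point the marginal risk reduction no longer justifies the lost productivity, echoing the paper's finding that optimal policies reserve most time for non-security tasks.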
When investing in cyber security resources, information security managers have to follow effective decision-making strategies. We refer to this as the cyber security investment challenge. In this paper, we consider three possible decision-support methodologies for security managers to tackle this challenge: methods based on game theory, combinatorial optimisation, and a hybrid of the two. Our modelling starts by building a framework in which we can investigate the effectiveness of a cyber security control at protecting different assets, seen as targets, in the presence of commodity threats. As game theory captures the interaction between the organisation’s and the attackers’ decisions, we consider a two-person control game between the security manager, who has to choose among different implementation levels of a cyber security control, and a commodity attacker, who chooses among different targets to attack. The pure game-theoretical methodology consists of a large game including all controls and all threats. In the hybrid methodology, the game solutions of individual control-games, along with their direct costs (e.g. financial), are combined with a Knapsack algorithm to derive an optimal investment strategy. The combinatorial optimisation technique consists of a multi-objective, multiple-choice Knapsack-based strategy. To compare these approaches we built a decision support tool and a case study based on current government guidelines. This work highlights the weaknesses and strengths of the different investment methodologies for cyber security, the benefit of their interaction, and the impact that indirect costs have on cyber security investment. Going a step further in validating our work, we show that our decision support tool provides the same advice as that advocated by the UK government with regard to the requirements for basic technical protection from cyber attacks in SMEs.
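The final step of the hybrid methodology, combining per-control game solutions with a Knapsack algorithm, might be sketched as follows. The control names, benefit values (standing in for each control-game's solution) and direct costs are entirely hypothetical; the game-solving step is assumed to have already run.

```python
# Hypothetical controls: (name, risk-reduction benefit from its control-game
# solution, direct cost). All figures are illustrative.
controls = [
    ("patching",   4.0, 3),
    ("2FA",        3.0, 2),
    ("awareness",  2.0, 2),
    ("monitoring", 5.0, 4),
]
BUDGET = 6  # total direct-cost budget

def knapsack(items, budget):
    """Classic 0/1 knapsack DP over integer costs; returns (benefit, names)."""
    best = {0: (0.0, [])}  # spent -> (best benefit, chosen controls)
    for name, benefit, cost in items:
        # Iterate over a snapshot so each control is used at most once.
        for spent, (value, chosen) in sorted(best.items()):
            new_spent = spent + cost
            if new_spent <= budget:
                cand = (value + benefit, chosen + [name])
                if cand[0] > best.get(new_spent, (-1.0, []))[0]:
                    best[new_spent] = cand
    return max(best.values())

print(knapsack(controls, BUDGET))  # best portfolio within the budget
```

With these numbers the algorithm prefers monitoring plus 2FA (benefit 8.0 at cost 6) over the individually strong patching control, showing how the Knapsack layer trades individual control-game values against a shared budget.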
We consider the problem of optimal investment in cyber-security by an enterprise. Optimality is measured with respect to (1) the overall monetary cost of implementation, (2) the negative side-effects of cyber-security controls (indirect costs), and (3) the mitigation of cyber-security risk. We consider “passive” and “reactive” threats, the former representing the case where attack attempts are independent of the defender’s plan, the latter where attackers can adapt and react to an implemented cyber-security defence. Moreover, we model in three different ways the combined effect of multiple cyber-security controls, depending on their degree of complementarity and correlation. We also consider multi-stage attacks and the potential correlations in the success of different stages. First, we formalize the problem as a non-linear multi-objective integer program. We then convert it into Mixed Integer Linear Programs (MILPs) that solve very efficiently for the exact Pareto-optimal solutions, even when the number of available controls is large. In our case study, we consider 27 of the most typical security controls, each with multiple intensity levels of implementation, and 37 common vulnerabilities facing a typical SME. We compare our findings against expert-recommended critical controls. We then investigate the effect of the security models on the resulting optimal plan and contrast the merits of different security metrics. In particular, we show the superior robustness of the security measures based on the “reactive” threat model, and the significance of the hitherto overlooked role of correlations.
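For a very small instance, the Pareto-optimal control portfolios that the MILP computes exactly can also be found by brute-force enumeration, which makes the multi-objective structure concrete. The controls, their (direct cost, indirect cost, risk mitigated) figures, and the naive additive risk model below are illustrative assumptions, not the paper's formulation.

```python
from itertools import chain, combinations

# Hypothetical controls: name -> (direct cost, indirect cost, risk mitigated).
controls = {
    "patching":   (3, 1, 0.30),
    "2FA":        (2, 2, 0.25),
    "training":   (1, 0, 0.10),
    "monitoring": (4, 1, 0.35),
}

def evaluate(subset):
    """Objectives for a portfolio: minimise both costs, maximise mitigation."""
    direct = sum(controls[c][0] for c in subset)
    indirect = sum(controls[c][1] for c in subset)
    risk = min(1.0, sum(controls[c][2] for c in subset))  # naive additive model
    return direct, indirect, risk

def dominates(a, b):
    """a dominates b: no worse on any objective, and not identical."""
    return a[0] <= b[0] and a[1] <= b[1] and a[2] >= b[2] and a != b

# Enumerate every subset of controls and keep the non-dominated ones.
subsets = chain.from_iterable(
    combinations(controls, r) for r in range(len(controls) + 1))
points = {s: evaluate(s) for s in subsets}
pareto = [s for s, p in points.items()
          if not any(dominates(q, p) for q in points.values())]
print(len(pareto))  # size of the Pareto frontier for this toy instance
```

Enumeration is exponential in the number of controls, which is exactly why the paper's MILP reformulation matters once 27 controls with multiple intensity levels are in play.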
This paper presents the design and the results of a cross-cultural study of user perceptions and attitudes toward electronic payment methods. We conduct a series of semi-structured interviews involving forty participants (20 in London, UK, and 20 in Manhattan, KS, USA) to explore how individuals use the mechanisms available to them within their routine payment and banking activities. We also study their comprehension of payment processes, the perceived effort and impact of using different methods, as well as direct or indirect recollections of (suspected or actual) fraud and related interactions with banks and retailers. By comparing UK and US participants, we also elicit commonalities and differences that may help better understand, if not predict, attitudes of US customers once technologies like Chip-and-PIN are rolled out – for instance, several US participants were confused by how to use it, while UK participants found it convenient. Our results show that purchasing habits as well as the availability of rewards schemes are primary criteria influencing choices relating to payment technologies, and that inconsistencies, glitches, and other difficulties with newer technologies generate frustration sometimes leading to complete avoidance of new payment methods.