Monica Whitty, Matthew Edwards, Michael Levi, Claudia Peersman, Awais Rashid, Angela Sasse, Tom Sorell, Gianluca Stringhini
Mass-marketing frauds (MMFs) are on the increase. Given the sums of money lost and the psychological impact of MMFs, there is an urgent need for new and effective methods to prevent more of these crimes. This paper reports the early planning of automated methods our interdisciplinary team is developing to prevent and detect MMF. Importantly, the paper presents the ethical and social constraints involved in such a model and highlights concerns others might also consider when developing automated systems.
Date: April 2017
Published: WWW ’17 Companion: Proceedings of the 26th International Conference on World Wide Web Companion, pp. 1311-1314
Publisher URL: https://dl.acm.org/citation.cfm?doid=3041021.3053891
Full Text: Available here (opens PDF)
The RISCS Annual Report 2017 was released at the UK Cyber Security Research Institutes Conference in October 2017, and is available to download here (opens PDF)
Iryna Yevseyeva, Charles Morisset and Aad van Moorsel
Users of computing systems and devices frequently make decisions related to information security, e.g., when choosing a password or deciding whether to log into an unfamiliar wireless network. Employers or other stakeholders may have a preference for certain outcomes, without being able, or willing, to enforce a particular decision. In such situations, systems may build in design nudges to influence decision making, e.g., by highlighting the employer’s preferred option. In this paper we model the influencing of information security decisions to identify which approaches to influencing are most effective and how they can be optimized. To do so, we extend traditional multi-criteria decision analysis models with modifiable criteria, which represent the options available to an influencer for shaping the choice of the decision maker. The notion of influence power is introduced to characterize the extent to which an influencer can sway decision makers. We illustrate our approach using data from a controlled experiment on techniques to influence which public wireless network users select. This allows us to calculate influence power and identify which design nudges exercise the most influence over user decisions.
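The core idea of the paper can be illustrated with a minimal sketch: score alternatives with a weighted-sum multi-criteria model, let a nudge modify one criterion of the influencer's preferred option, and measure influence power as the resulting shift in choice probability. All names, scores, weights, and the +0.3 salience boost below are illustrative assumptions, not data from the paper, and the softmax choice model is a simplification of the authors' approach.

```python
import math

def utility(scores, weights):
    """Weighted-sum utility over criteria (classical MCDA)."""
    return sum(w * s for w, s in zip(weights, scores))

def choice_probabilities(utilities):
    """Softmax (logit) choice model over the alternatives' utilities."""
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical wireless networks scored on two criteria
# (perceived security, convenience), with decision-maker weights.
weights = [0.6, 0.4]
networks = {
    "SecureNet": [0.9, 0.5],
    "FreeWifi":  [0.3, 0.9],
}

p_before = choice_probabilities(
    [utility(s, weights) for s in networks.values()]
)[0]  # P(choose SecureNet) without any nudge

# Nudge: boost the salience of SecureNet's security criterion
# (a "modifiable criterion"), e.g. by visually highlighting it.
nudged = dict(networks)
nudged["SecureNet"] = [0.9 + 0.3, 0.5]  # assumed +0.3 salience boost
p_after = choice_probabilities(
    [utility(s, weights) for s in nudged.values()]
)[0]

influence_power = p_after - p_before
print(f"P(SecureNet) before: {p_before:.3f}, after: {p_after:.3f}, "
      f"influence power: {influence_power:.3f}")
```

Under this toy model, a positive influence power means the nudge shifted choice probability toward the influencer's preferred network; comparing influence power across candidate nudges identifies the most effective one.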
Date: April 2016
Published: Performance Evaluation: An International Journal, Volume 98. Publisher: Elsevier
Publisher URL: http://www.sciencedirect.com/science/article/pii/S0166531616000043
Full Text: https://goo.gl/v8EOOg DOI: http://dx.doi.org/10.1016/j.peva.2016.01.003
Lynne M. Coventry, Debora Jeske, John M. Blythe, James Turland and Pam Briggs
Despite their best intentions, people struggle with the realities of privacy protection and will often sacrifice privacy for convenience in their online activities. Individuals show systematic, personality-dependent differences in their privacy decision making, which makes them interesting to those who seek to design ‘nudges’ intended to influence privacy behaviors. We explore such effects in a cookie decision task. Two hundred and ninety participants were given an incidental website review task that masked the true aim of the study. At the task outset, they were asked whether they wanted to accept a cookie in a message that either contained a social framing ‘nudge’ (they were told that either a majority or a minority of users like themselves had accepted the cookie) or contained no information about social norms (control). At the end of the task, participants were asked to complete a range of personality assessments (impulsivity, risk-taking, willingness to self-disclose and sociability). We found social framing to be an effective behavioral nudge, reducing cookie acceptance in the minority social norm condition. Further, we found personality effects, in that those scoring highly on risk-taking and impulsivity were significantly more likely to accept the cookie. Finally, we found that the application of a social nudge could attenuate the personality effects of impulsivity and risk-taking. We explore the implications for those working in the privacy-by-design space.
Date: 7 September 2016
Published: Frontiers in Psychology, Volume 7, Article 1341, pp. 1-12. Publisher: Frontiers Research Foundation
Full Text: http://journal.frontiersin.org/article/10.3389/fpsyg.2016.01341/full DOI: http://dx.doi.org/10.3389/fpsyg.2016.01341
Iryna Yevseyeva, Vitor Basto Fernandes, Aad van Moorsel, Helge Janicke and Michael Emmerich
To protect a system from potential cyber security breaches and attacks, one needs to select efficient security controls, taking into account technical and institutional goals and constraints, such as the available budget, enterprise activity, and the internal and external environment. Here we model the security control selection problem as two-stage decision making: first, managers and information security officers define the size of the security budget; second, the budget is distributed between various types of security controls. By viewing loss prevention with security controls as gains relative to a baseline (the losses incurred without applying security controls), we formulate the decision-making process as a classical portfolio selection problem. The model treats security budget allocation as a two-objective problem, balancing risk and return under a budget constraint. The Sharpe ratio is used to identify an optimal point on the Pareto front at which to spend the budget. At the management level, the budget size is chosen by computing the trade-offs between Sharpe ratios and budget sizes. The proposed two-stage decision-making model can be solved by quadratic programming techniques, as demonstrated for a test-case scenario with realistic data.
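The second stage described above can be sketched in a few lines: treat each security control as a portfolio asset with an expected loss-prevention "return" and a variance, and pick the allocation of the fixed budget that maximizes the Sharpe ratio. The control names, returns, variances, the 0.02 baseline, and the independence assumption below are all illustrative; a simple grid search stands in for the quadratic-programming solver used in the paper.

```python
import itertools
import math

# Hypothetical controls: (expected return per unit budget, variance).
# Returns are assumed independent, so portfolio variance is the
# sum of squared-weight-scaled variances.
controls = {
    "firewall":   (0.08, 0.010),
    "training":   (0.12, 0.030),
    "monitoring": (0.10, 0.020),
}

def sharpe(weights, baseline=0.02):
    """Sharpe ratio of an allocation: (expected return - baseline) / stdev."""
    mean = sum(w * controls[c][0] for c, w in zip(controls, weights))
    var = sum((w ** 2) * controls[c][1] for c, w in zip(controls, weights))
    return (mean - baseline) / math.sqrt(var) if var > 0 else float("-inf")

# Grid search over allocations summing to 1 (a stand-in for the
# quadratic-programming step in the paper).
step = 0.05
grid = [round(i * step, 2) for i in range(int(1 / step) + 1)]
best = max(
    ((w1, w2, round(1 - w1 - w2, 2))
     for w1, w2 in itertools.product(grid, grid) if w1 + w2 <= 1),
    key=sharpe,
)
print("Best allocation:", dict(zip(controls, best)),
      "Sharpe:", round(sharpe(best), 3))
```

Repeating this search for several candidate budget sizes, and comparing the resulting Sharpe ratios against the cost of each budget, mirrors the first-stage management decision described in the abstract.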
Published: Procedia Computer Science, Volume 100, 2016, pp. 971-97
Publisher URL: http://www.sciencedirect.com/science/article/pii/S1877050916324309
Full Text: https://goo.gl/hJKTkS