Inaugural Issue of the Journal of Cybersecurity Published

The inaugural issue of the Journal of Cybersecurity will be published online today, December 11th. The Journal was created by RISCS members, in collaboration with colleagues in the UK and abroad, as a high-quality venue for publishing research into the science of cyber security. The Journal welcomes submissions of evidence-based research from all disciplinary backgrounds, and in particular multi-disciplinary research.

Optimising Time Allocation for Network Defense

Tristan Caulfield and Andrew Fielder

Abstract

The presence of unpatched, exploitable vulnerabilities in software is a prerequisite for many forms of cyberattack. Because of the almost inevitable discovery of a vulnerability and creation of an exploit for all types of software, multiple layers of security are usually used to protect vital systems from compromise. Accordingly, attackers seeking to access protected systems must circumvent all of these layers. Resource- and budget-constrained defenders must choose when to execute actions such as patching, monitoring and cleaning infected systems in order to best protect their networks. Similarly, attackers must also decide when to attempt to penetrate a system and which exploit to use when doing so. We present an approach to modelling computer networks and vulnerabilities that can be used to find the optimal allocation of time to different system defence tasks. The vulnerabilities, state of the system and actions by the attacker and defender are used to build partially observable stochastic games. These games capture the uncertainty about the current state of the system and the uncertainty about the future. The solution to these games is a policy, which indicates the optimal actions to take for a given belief about the current state of the system. We demonstrate this approach using several different network configurations and types of player. We consider a trade-off for the system administrator, where they must allocate their time to performing either security-related tasks or performing other required non-security tasks. The results presented highlight that, with the requirement for other tasks to be performed, following the optimal policy means spending time on only the most essential security-related tasks, while the majority of time is spent on non-security tasks.
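
As a rough illustration of the kind of belief tracking these games involve (a minimal sketch only, not the authors' model: the two-state system, the probabilities and the action names below are invented), a defender maintains a probability that the system is compromised, updates it after each observation, and acts on it:

```python
# Minimal sketch of belief tracking for a defender policy. The two-state model
# ("clean" vs "compromised"), the probabilities and the action names are
# hypothetical; the paper builds much richer partially observable stochastic
# games over full network models.

STATES = ("clean", "compromised")

# P(next_state | current_state, defender_action): illustrative numbers only.
TRANSITION = {
    "patch": {"clean": {"clean": 0.99, "compromised": 0.01},
              "compromised": {"clean": 0.30, "compromised": 0.70}},
    "other": {"clean": {"clean": 0.95, "compromised": 0.05},
              "compromised": {"clean": 0.00, "compromised": 1.00}},
}

# P(observation | state): the defender only sees noisy alerts, not the state.
OBSERVATION = {
    "alert":    {"clean": 0.10, "compromised": 0.60},
    "no_alert": {"clean": 0.90, "compromised": 0.40},
}


def update_belief(belief, action, observation):
    """Standard Bayesian belief update after acting and observing."""
    predicted = {
        s2: sum(belief[s1] * TRANSITION[action][s1][s2] for s1 in STATES)
        for s2 in STATES
    }
    unnormalised = {s: OBSERVATION[observation][s] * predicted[s] for s in STATES}
    total = sum(unnormalised.values())
    return {s: p / total for s, p in unnormalised.items()}


def policy(belief, threshold=0.2):
    """Toy policy: spend time on security only when compromise is likely enough."""
    return "patch" if belief["compromised"] > threshold else "other"


belief = {"clean": 0.95, "compromised": 0.05}
action = policy(belief)                        # -> "other": belief is low
belief = update_belief(belief, action, "alert")
print(policy(belief), belief)                  # the alert raises the belief
```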

Date: November 5, 2015
Published: Journal of Cybersecurity, 2015.
Publisher: Oxford University Press
Publisher URL: http://cybersecurity.oxfordjournals.org/content/early/2015/11/05/cybsec.tyv002
Full Text: http://cybersecurity.oxfordjournals.org/content/early/2015/11/05/cybsec.tyv002.full-text.pdf
DOI: http://dx.doi.org/10.1093/cybsec/tyv002
Open Access: http://cybersecurity.oxfordjournals.org/content/early/2015/11/05/cybsec.tyv002.full-text.pdf

An Inclusive, Value-Sensitive Design Perspective on Future Identity Technologies

Lisa Thomas and Pamela Briggs

Abstract

Identity technologies constitute one of the fastest growing areas for research and development, driven by both commercial and administrative imperatives. Crucially, they constitute the means by which we include or exclude individuals and groups in terms of access to goods, services or information — yet few developments in this space embrace an inclusive or value-sensitive design philosophy. We describe a rigorous exercise in which we source scenarios that capture new research in the identity space and use these as probes in an inclusive design process. Workshops were held with six marginalized community groups: young people, older adults, refugees, black and minority ethnic (BME) women, people with disabilities, and mental health service users. Our findings echo Herzberg’s two-factor theory in which we are able to identify a set of relatively common values around sources of potential dissatisfaction (hygiene factors) as well as a set of motivators that are differentially valued across communities.

Date: October, 2015
Published: ACM Transactions on Computer-Human Interaction (TOCHI), Volume 22 Issue 5, October 2015.
Publisher: ACM
Publisher URL: https://dl.acm.org/citation.cfm?doid=2814459.2778972
Full Text: https://dl.acm.org/ft_gateway.cfm?id=2778972
DOI: http://dx.doi.org/10.1145/2778972
Open Access: http://nrl.northumbria.ac.uk/23871/

Improving Security Policy Decisions with Models

Tristan Caulfield and David Pym

Abstract

A rigorous methodology, grounded in mathematical systems modeling and the economics of decision making, can help security managers explore the operational consequences of their design choices and make better decisions.

Date: October 28, 2015
Published: IEEE Security & Privacy, Volume 13, Issue 5, 2015, Special Issue, SPSI: Economics of Cybersecurity, pp. 34-41.
Publisher: IEEE
DOI: http://dx.doi.org/10.1109/MSP.2015.97

Fred Schneider Gives Keynote Talk at the UK Cyber Security Research Conference 2015

Professor Fred Schneider of Cornell University gave a keynote address to the UK Cyber Security Research Conference 2015 on 28th October. Schneider outlined joint work with Deirdre Mulligan (UC Berkeley) to create a doctrine of cyber security policy: a framework of legal principles to guide regulatory and economic decisions and to create incentives for designing and deploying trustworthy systems.

Schneider began by reviewing how cyber security “really evolved”. Cyber security began with 1960s time-sharing computers, which opened up concerns that one user could access another’s computation. The first idea for a solution was prevention: don’t build in vulnerabilities.

Failings rapidly emerged: there is no clear way to measure security, and this idea is unworkable for large, complex systems and impedes innovation. Even if those problems could be solved, preventing vulnerabilities ignores issues such as social engineering and the dynamic environment; everything would have to be reverified every time anything changed. As a result, the US government stopped investing in program verification, although interest has lately revived because of automated tools and verification may now be part of the solution.

The second idea was risk management – that is, investing in security to reduce expected losses. The problems here are that there’s no reliable way to estimate the probability of an attack and that there is no way to value losses like confidentiality, integrity, recovery, and costs to third parties. In an attack that harvests personal information in order to apply fraudulently for a credit card, the losses are clear-cut. But what are the costs of taking out a piece of the power grid when the components take a year to replace because they’re custom-made just-in-time? Further, the most attractive targets have incentives not to reveal they’ve been attacked. Finally, underinvesting in security turns out to be a rational strategy because individuals don’t reap the full benefit themselves and can’t control vulnerabilities in any case; even a good system ages. In life insurance, the past is a good predictor of the future – but not in software.

By the 2000s, cyber security had moved on to the doctrine of accountability: the idea that every packet should be attributable so that if it’s evil someone can be arrested. Like CCTV, this system enables retrospective attribution and punishment, not real-time security. However, attribution is often not possible because of national borders, non-state actors, and the weak binding between machines and individual users. It’s also incomplete, as not all attacks are crimes (some may be acts of war), and the set of options for privacy is very narrow. This doctrine is still being debated – Schneider cited ID cards as an example – but has not gained much traction.

As a way of moving forward, Schneider and Mulligan began by considering how cyber security works from an economic perspective. In those terms, cyber security is a public good: it is non-rivalrous and non-excludable. Public health shares those qualities, and even though no one thinks public health will ever be a solved problem, its goals are clear: prompt the production of health and manage its absence.

Where public health is about people, cyber security is about people plus computers. Its parallel goals, therefore, might be: prompt the production of cyber security; manage the remaining insecurity; and reach political agreement to balance individual rights and public welfare.

Many of the means for following this policy are already with us or in development: formal methods, testing, standards for development, analysis, education, training, and certification. Since social pressure has not been sufficient, managing the remaining insecurity involves several strategies. Schneider and Mulligan propose incentives in the form of liability for producers. Patching needs to be reliable, rapidly deployed, and widespread; no standard requires software producers to provide patches, but they could be penalized for not doing so. Above the level of individual machines, the lack of a way to patch critical parts of the infrastructure, such as encryption, leaves the whole infrastructure brittle. Schneider also proposed a requirement for system diversity: even though it’s economically cheaper if everyone uses the same system, diversity can be engineered and would work in some, but not all, settings. In Schneider’s example, earlier versions of Microsoft Windows randomised the address space, a practice that ceased in Windows 7.

Taking further analogies from public health, Schneider and Mulligan suggest some possibilities: self-checks and self-monitoring for software and hardware; monitoring of network traffic at significant boundaries; and coordination among ISPs to defend against attacks such as DDoS or to stop machines that are not fully patched from connecting. ISPs currently have no incentive to do this. Techniques for filtering encrypted packets need to be studied.

A more complex issue that filtering raises is where to put the filters: should a nation have a firewall, and what values should apply? Schneider cited China as a current example of a national firewall, albeit in the service of censorship rather than security. Such isolation would be technically difficult to manage: the risk of balkanizing the internet is that protocols within individual pools may diverge, so machines may work in some areas but not in others.

Schneider concluded by stressing the importance of metaphors. If cyber attacks are crimes, that implies deterrence through accountability, offering no one incentives and leaving the work to those whose job it is to catch and punish the criminals. If cyber attacks are a disease, you take the approach of public cyber security – raising the level of security in all systems is better than raising it in just a few systems.

However, public cyber security will not work in the area where cyber attacks are warfare. The 1980s cold war maintained balance via the doctrine of “mutually assured destruction”, in which rational players would not attack. In the computer world, however, where it’s possible to destabilise the retaliatory attack, we live in a world of “mutually unassured destruction”, in which the incentive not to launch a pre-emptive attack is gone. This, Schneider said, is a different set of problems that needs a different set of incentives. However, the fundamental problem remains the same: that we are deploying untrustworthy systems.

A Bayesian Approach to Portfolio Selection in Multicriteria Group Decision Making

Michael T.M. Emmerich, André H. Deutz and Iryna Yevseyeva

Abstract

In the a-posteriori approach to multicriteria decision making the idea is to first find a set of interesting (usually non-dominated) decision alternatives and then let the decision maker select among these. Often an additional demand is to limit the size of alternatives to a small number of solutions. In this case, it is important to state preferences on sets. In previous work it has been shown that independent normalization of objective functions (using for instance desirability functions) combined with the hypervolume indicator can be used to formulate such set-preferences. A procedure to compute and to maximize the probability that a set of solutions contains at least one satisfactory solution is established. Moreover, we extend the model to the scenario of multiple decision makers. For this we compute the probability that at least one solution in a given set satisfies all decision makers. First, the information required a-priori from the decision makers is considered. Then, a computational procedure to compute the probability for a single set to contain a solution, which is acceptable to all decision makers, is introduced. Thereafter, we discuss how the computational effort can be reduced and how the measure can be maximized. Practical examples for using this in database queries will be discussed, in order to show how this approach relates to applications.
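
The core set-acceptance calculation can be sketched very simply if one assumes, as here, that each decision maker independently accepts a solution with a probability given by its desirability score (the numbers below are made up; the paper goes on to connect this measure to the hypervolume indicator and to efficient maximization):

```python
# Sketch of the set-acceptance probabilities described above, under the
# simplifying (and here assumed) model that each decision maker independently
# accepts solution i with probability given by a desirability score in [0, 1].
# The desirability values below are invented for illustration.
from math import prod

# desirability[m][i] = P(decision maker m accepts solution i)
desirability = [
    [0.8, 0.3, 0.5],   # decision maker 1
    [0.4, 0.7, 0.6],   # decision maker 2
]


def p_single_dm_satisfied(probs):
    """P(at least one solution in the set is acceptable to one decision maker)."""
    return 1.0 - prod(1.0 - p for p in probs)


def p_all_dms_satisfied_by_some_solution(desirability):
    """P(at least one solution in the set is acceptable to *all* decision makers)."""
    n_solutions = len(desirability[0])
    # P(solution i is acceptable to everyone), under independence:
    p_joint = [prod(dm[i] for dm in desirability) for i in range(n_solutions)]
    return 1.0 - prod(1.0 - p for p in p_joint)


print(p_single_dm_satisfied(desirability[0]))              # single decision maker
print(p_all_dms_satisfied_by_some_solution(desirability))  # group version
```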

Date: October 9, 2015
Presented: CENTERIS’15, 7th Conference of ENTERprise Information Systems
Published: Procedia Computer Science (vol. 64), 2015, pp. 993-1000.
Publisher: Elsevier
Publisher URL: http://www.sciencedirect.com/science/article/pii/S1877050915027532
DOI: http://dx.doi.org/10.1016/j.procs.2015.08.618
Open Access: http://www.sciencedirect.com/science/article/pii/S1877050915027532/pdf?md5=ef10445dfa5e74219cf1a177ac762fbb&pid=1-s2.0-S1877050915027532-main.pdf

Selecting Optimal Subset of Security Controls

Iryna Yevseyeva, Vitor Basto-Fernandes, Michael Emmerich, Aad van Moorsel

Abstract

Choosing an optimal investment in information security is an issue most companies face these days. Which security controls to buy to protect the IT system of a company in the best way? Selecting a subset of security controls among many available ones can be seen as a resource allocation problem that should take into account conflicting objectives and constraints of the problem. In particular, the security of the system should be improved without hindering productivity, under a limited budget for buying controls. In this work, we provide several possible formulations of the security controls subset selection problem as a portfolio optimization, which is well known in financial management. We propose approaches to solve them using existing single and multiobjective optimization algorithms.
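
One of the simplest formulations this framing suggests is a budget-constrained portfolio problem; the sketch below (with invented control data and a brute-force solver, not the algorithms used in the paper) shows the shape of such a formulation:

```python
# One of many possible formulations: pick the subset of controls that maximises
# estimated risk reduction, subject to a budget and a cap on productivity loss.
# Control data and the brute-force solver are illustrative only; the paper
# discusses richer portfolio-style formulations and multiobjective solvers.
from itertools import combinations

# (name, cost, risk_reduction, productivity_impact) -- hypothetical values
controls = [
    ("disk encryption",     20, 0.30, 0.05),
    ("2-factor auth",       15, 0.25, 0.10),
    ("endpoint monitoring", 30, 0.35, 0.05),
    ("USB lockdown",        10, 0.15, 0.20),
]

BUDGET = 50
MAX_PRODUCTIVITY_IMPACT = 0.20


def best_portfolio(controls, budget, max_impact):
    best, best_value = (), 0.0
    for r in range(len(controls) + 1):
        for subset in combinations(controls, r):
            cost = sum(c[1] for c in subset)
            impact = sum(c[3] for c in subset)
            value = sum(c[2] for c in subset)
            if cost <= budget and impact <= max_impact and value > best_value:
                best, best_value = subset, value
    return [c[0] for c in best], best_value


print(best_portfolio(controls, BUDGET, MAX_PRODUCTIVITY_IMPACT))
```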

Date: October 9, 2015
Presented: CENTERIS’15, 7th Conference of ENTERprise Information Systems
Published: Procedia Computer Science (vol. 64), 2015, pp. 1035-1042.
Publisher: Elsevier
Publisher URL: http://www.sciencedirect.com/science/article/pii/S187705091502760X
DOI: http://dx.doi.org/10.1016/j.procs.2015.08.625
Open Access: http://www.sciencedirect.com/science/article/pii/S187705091502760X/pdf?md5=283a7a6e9e7830cb78266eccf9edc1c8&pid=1-s2.0-S187705091502760X-main.pdf

Addressing Consumerisation of IT Risks with Nudging

Iryna Yevseyeva, James Turland, Charles Morisset, Lynne Coventry, Thomas Gross, Christopher Laing, Aad van Moorsel

Abstract

In this work we address the main issues of Information Technology (IT) consumerization that are related to security risks, and vulnerabilities of devices used within Bring Your Own Device (BYOD) strategy in particular. We propose a ‘soft’ mitigation strategy for user actions based on nudging, widely applied to health and social behavior influence. In particular, we propose complementary, less strict, more flexible Information Security policies, based on risk assessment of device vulnerabilities and threats to corporate data and devices, combined with a strategy of influencing security behavior by nudging. We argue that nudging, by taking into account the context of the decision-making environment, and the fact that the employee may be in a better position to make a more appropriate decision, may be more suitable than strict policies in situations of uncertainty of security-related decisions. Several examples of nudging are considered for different tested and potential scenarios in security context.
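
The flavour of the proposal can be sketched as a risk score that triggers a nudge rather than a hard block in medium-risk situations; everything in the example below (risk factors, weights, thresholds, messages) is invented for illustration and is not the paper's risk model:

```python
# A minimal sketch of pairing risk assessment with a nudge rather than a hard
# block. The risk factors, weights, thresholds and nudge texts are all invented
# for illustration; the paper evaluates nudging far more broadly.

def device_risk(device):
    """Crude additive risk score for a BYOD device (hypothetical weighting)."""
    score = 0.0
    if not device.get("encrypted"):
        score += 0.4
    if not device.get("patched"):
        score += 0.3
    if device.get("on_public_wifi"):
        score += 0.3
    return score


def access_decision(device, resource_sensitivity):
    """Return (allow?, message). Medium risk triggers a nudge, not a block."""
    risk = device_risk(device) * resource_sensitivity
    if risk < 0.2:
        return True, "Access granted."
    if risk < 0.5:
        return True, ("Nudge: this file is sensitive and your device is not "
                      "fully patched. Consider updating before you continue.")
    return False, "Blocked by policy: please contact IT."


laptop = {"encrypted": True, "patched": False, "on_public_wifi": True}
print(access_decision(laptop, resource_sensitivity=0.6))
```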

Date: September 27, 2015
Published: International Journal of Information Systems and Project Management, September 2015, vol. 3, no. 3, pp. 5-22.
Publisher: SciKA
Publisher URL: http://www.sciencesphere.org/ijispm/index.php?p=5001
DOI: http://dx.doi.org/10.12821/ijispm030301
Open Access: http://www.sciencesphere.org/ijispm/archive/ijispm-030301.pdf

Using IMUs to Identify Supervisors on Touch Devices

Ahmed Kharrufa, James Nicholson, Paul Dunphy, Steve Hodges, Pam Briggs, Patrick Olivier

Abstract

In addition to their popularity as personal devices, tablets are becoming increasingly prevalent in work and public settings. In many of these application domains a supervisor user – such as the teacher in a classroom – oversees the function of one or more devices. Access to supervisory functions is typically controlled through the use of a passcode, but experience shows that keeping this passcode secret can be problematic. We introduce SwipeID, a method of identifying supervisor users across a set of touch-based devices by correlating data from a wrist-worn inertial measurement unit (IMU) and a corresponding touchscreen interaction. This approach naturally supports access at the time and point of contact and does not require any additional hardware on the client devices. We describe the design of our system and the challenge-response protocols we have considered. We then present an evaluation study to demonstrate feasibility. Finally we highlight the potential for our scheme to extend to different application domains and input devices.
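
The underlying intuition, that the supervisor's wrist-worn IMU registers a burst of motion exactly when their hand touches a screen, can be sketched with synthetic signals; this is not SwipeID's actual challenge-response protocol or signal processing:

```python
# Sketch of the underlying idea: the supervisor's wrist-worn IMU shows a burst
# of motion at the moment their hand touches the screen, so the device whose
# touch events line up best with the IMU signal is the one the supervisor
# touched. The signals below are synthetic; SwipeID's real challenge-response
# protocol and processing pipeline are described in the paper.
import numpy as np

rng = np.random.default_rng(0)
T = 200  # samples

# Synthetic IMU acceleration magnitude with bursts at the supervisor's taps.
imu = rng.normal(0, 0.05, T)
supervisor_taps = [40, 90, 150]
for t in supervisor_taps:
    imu[t:t + 5] += 1.0


def touch_signal(tap_times, length=T):
    """Binary time series of touch-down events reported by one tablet."""
    s = np.zeros(length)
    s[list(tap_times)] = 1.0
    return s


candidates = {
    "tablet_A": touch_signal(supervisor_taps),   # touched by the supervisor
    "tablet_B": touch_signal([20, 70, 130]),     # touched by someone else
}

# Score each device by the peak cross-correlation with the IMU data.
scores = {
    name: float(np.max(np.correlate(imu, s, mode="same")))
    for name, s in candidates.items()
}
print(max(scores, key=scores.get), scores)   # expected winner: tablet_A
```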

Keywords: IMU, Association, Authentication, Touch interaction, UI design
Date: September 17, 2015
Presented: 15th IFIP TC.13 International Conference on Human-Computer Interaction – INTERACT 2015, 14-18 September 2015, Bamberg, Germany.
Published: Lecture Notes in Computer Science Volume 9297, 2015, pp. 565-583.
Publisher: Springer
Publisher URL: http://link.springer.com/chapter/10.1007%2F978-3-319-22668-2_44
Full Text: http://link.springer.com/content/pdf/10.1007%2F978-3-319-22668-2_44.pdf
DOI: http://dx.doi.org/10.1007/978-3-319-22668-2_44