Ivan Flechais

Ivan Flechais, an associate professor at Oxford University, and Norbert Nthala investigated social relationships and their role in home data security, in work funded by a small grant from the NCSC.

The home is worth studying because internet use is increasing, both work and non-work services are increasingly used on personal and home devices, and the growing value this represents is observably attracting attackers who target those systems, devices, and data. In 2007, Symantec reported that home users accounted for 95% of all targeted attacks. Originally, the goal was to extract value from home users; more recently, attackers have used the home as a stepping stone to attack others, as in the Christmas 2014 attacks that used compromised home devices against Xbox Live and the PlayStation Network, and the October 2016 attack on the DNS provider Dyn. This trend means we are both at risk from homes and more at risk in our homes. Unlike most organisations, homes have no dedicated support for mitigating threats, keeping software up to date, or procuring and retiring equipment. When people need help, who do they call? This research aimed to find out what happens when home users face these issues.

The state of the art in home data security is generally poor: mostly automated patching, antivirus software (which many people distrust), and a reliance on raising awareness. Awareness-raising can never be an effective strategy for protecting an entire population; it can't be the primary thing people rely on, and there's plenty of evidence to support that.

The study had two phases. The first was a qualitative exploration of how people make decisions, based on 50 semi-structured interviews with UK home users that focused on security decision-making and were analysed using Grounded Theory. The second phase used those results to inform a quantitative study intended to validate and generalise the qualitative findings; the researchers are still analysing the statistics derived from responses from 1,032 UK residents.

The researchers found that although the technology industry tends to assume the owner of a device is also its competent administrator, this is generally not true for home users. The result is a lot of informal networking. Those seeking help look for competence first and foremost, though what matters is perceived competence rather than actual competence. These users also value trust and continuity of care: people strongly report wanting a consistent source of adequate, personalised advice. Awareness-raising generally targets the whole population, but what people actually seek is targeted, individualised help that is consistent over time. Those who give advice demonstrate a strong sense of responsibility for it, and for the consequences if it's wrong. How do we know what good-quality advice looks like, particularly in an informal setting?

In their survey of 1,032 participants, Flechais and Nthala found that people leverage their existing informal social relationships. The most frequently named source of help is someone who works in data security, closely followed by someone who has studied it, and then by people with more experience of working with technical devices and services. The rest of the list: people who have experienced a prior data security incident, taken a technical course, work for a technical company, or have a technical job. This perception of competence also shapes behaviour: people will copy or adapt another person's security practices if that person is perceived to be more competent (an interesting notion of relative competence), and will accept or offer unsolicited security advice on the same basis.

People also crave trust. The choice of a source of advice, a particular service, and the extent of sharing devices and services are all influenced by trust. People respond to cues such as brand recognition, social relationships, and visual cues such as the SSL padlock, HTTPS, and logos.

Continuity of care, meaning continuing availability, also influences people's preferences for sources of help. When seeking help, they pick friends over relatives, though not by much, then workmates, then service providers, and finally an IT repair shop. In other words, people exploit their social networks, an intriguing choice since the people they consult might be completely incompetent, and their own inability to assess competence is a further issue. Even so, they tend to choose the informal options first.

Flechais and Nthala found a complex culture around responsibility and duty of care. Home users take the initiative in protecting themselves, but some also assume responsibility for others, though they are far more likely to offer unsolicited advice to family members than to friends. Those who offer advice feel obliged to make good when their advice turns out to be bad, a responsibility determined by the social relationship.

To evaluate the quality of the security advice they're given, home users rely on their perception that it's something a competent person does or recommends. Less reliably, however, they also fall prey to survival/outcome bias: nothing bad has happened, therefore the advice must be good. This fallacy (just because you haven't been breached doesn't mean you're doing the right thing) appeared in the interviews, though it was not confirmed in the survey because of the difficulty of testing for a bias. It underpins inaction, however, and is worth exploring in greater detail.

In comments, Angela Sasse noted that she and Monica Whitty are finding, in the Detection and Prevention of Mass-Marketing Fraud project and in work with Gumtree, that many users exchange (often not very good) advice in the forums. Another small-grant project interviewed people who had just bought a new laptop or phone about security; it has found a surprising number of people who pay someone to come round once a quarter or once a month to perform updates and check their systems. How qualified these helpers are is unknown.


Wendy M. Grossman

Freelance writer specializing in computers, freedom, and privacy. For RISCS, I write blog posts and summaries of meetings and talks.