Paul Iganski: Ethical debates for practical application

Paul Iganski (Lancaster) is associated with the Centre for Research and Evidence on Security Threats (CREST), where he chairs the Security Research Ethics Committee.


CREST is a multi-university national hub for research on security-related matters. It is commissioned by the ESRC with funds in part from the UK security and intelligence agencies.
In chairing the ethics committee, Iganski is quasi-independent; his job is to ensure that all research associated with the Centre follows good ethical practice. Any project funded by or associated with CREST must first clear its home institution's ethics procedures and then CREST's. On average, the CREST Security Research Ethics Committee receives six applications per month, and to date few have passed through the committee without queries and suggestions relating to the ethical concerns affecting security research.

Iganski discussed three of the areas that have generated the most debate in the committee:

  • Secondary analysis of research data, such as a traditional small, closed data set from empirical research.
  • Confidential data about people that its stewards or holders might open up to researchers: police records, victim statements, suspect interviews, Crown Prosecution Service records (including prosecutors' reflections), court records, and records from the probation services and others. Not all of this material is in the public domain, and it represents a vast body of records that criminological and other researchers access.
  • Open-source big data, such as public online interactional data from social media, primarily Twitter but also other sites like Facebook.

For the purposes of discussion, Iganski began by assuming that in each of the above scenarios the data providers had not given specific informed consent for their data to be used in the projects. In such cases, an ethics committee serves as a proxy research participant, making an informed decision on behalf of the original data provider about whether they would take part in this new use of their data. There are also many studies where consent was never obtained for future reuse of data. Rather than take the absolutist stance that all such reuse should be barred, and given that in this type of work the original respondents cannot be contacted for consent, an ethics committee again serves as a proxy and decides on a case-by-case basis.

Iganski outlined a hypothetical case in which the original participants had not been asked and the applicants wanted to use the data. The first concern is the potential for harm if the individuals' anonymity is betrayed, which goes far beyond a broken promise. The committee therefore asked: What did these individuals agree to participate in? Did they know that their data might be reused in future security-funded projects? Because such studies may have included extremists and terrorists among their participants, the people concerned might still be subjects of interest to the authorities.

In such cases, the committee has felt it was incumbent upon them to do more than speculate, and where possible, to canvass views from people similar to the original participants. In one such case, those surveyed were unequivocally against reusing the data, even decades later. When reviewing applications, Iganski’s group makes clear that consent forms must explicitly state the source of funding for the research, as well as whether the researchers anticipate that the data will be reused in future, whether by themselves or others.

Given today's widespread use of social media, there is also a very real danger that subjects can be reidentified via sophisticated searches linking their profiles to verbatim quotes appearing in academic publications. Twitter's terms and conditions allow quotation, both in academic papers and in the media, but they require that the full text of a tweet, including the account holder's handle, be published unedited. On this basis, researchers concerned with hate speech have used published Twitter posts without obtaining informed consent from the account holders. Given the lack of privacy protection, these individuals may be identifiable far beyond the audience they intended to reach, and publication may expose them to stigmatisation, ostracism, or even physical harm.

Nonetheless, social media provides a valuable reservoir of unsolicited discriminatory comments, and of public reaction to them, for those researching hate crimes and hate speech. In such research, an ethics committee serves as a proxy respondent for these account holders. If asked for their consent to publish their tweets, many would likely refuse. In fact, a November 2017 study by Matthew L. Williams, Pete Burnap, and Luke Sloan (Cardiff), published in the British Journal of Sociology, reported that in an online survey of over 500 Twitter users, the vast majority (80%) said they would expect to be asked for consent before their tweets were published in academic outputs. An even larger majority (over 90%) said they would want to remain anonymous.

Iganski noted that some academics might argue that those posting racist speech on social media deserve whatever they get, since they know they are publishing in a public forum. Iganski's view, however, is that, just as offline, many incidents occur in the heat of the moment, often with alcohol involved, and people frequently come to regret what they have said.

In the subsequent discussion, a commenter noted that journalistic and academic ethics diverge in this area: journalists assume that a tweet is a public statement that can be quoted under fair use, and tweets are commonly quoted without permission in highly public settings. Iganski responded that academics, by contrast, have a longer-standing commitment to the principle of consent and the protections of anonymity that go with it. Others found the committee's method interesting but wondered how it would have handled the case if the people consulted had been divided down the middle, or if the Twitter users surveyed had been mixed in their opinions. Iganski said the committee would have had to make a judgement call bearing that evidence in mind.

This talk was presented at the November 24, 2017 RISCS meeting, “Ethics, Cybersecurity, and Data Science”, hosted by the Interdisciplinary Ethics Research Group at the University of Warwick.

About Wendy M. Grossman

Freelance writer specializing in computers, freedom, and privacy. For RISCS, I write blog posts and meeting and talk summaries.