Kat Hadjimatheou: Ethics and algorithmic decision-making

Kat Hadjimatheou (Warwick)

Kat Hadjimatheou, a researcher with the Interdisciplinary Ethics Research Group at the University of Warwick, discussed how her group applies Tom Sorell’s outline of ethics to emerging technologies such as profiling with big data, an area that attracts both compelling and misguided criticisms from the point of view of autonomy and fairness.

Many projects Hadjimatheou’s group participates in involve practitioners from various types of organisations – police, security, technology companies, local authorities. Often, these practitioners expect that if they are unlucky enough to have to work with ethicists they will always be hearing the word “no” from the “moral police”. Data scientists and start-ups are particularly frustrated by this when they’re excited about their latest discovery; the same is true of the police, who want to be able to do the same things criminals can do. Sometimes ethicists just say “yes”. More often, however, their response is to suggest thinking more deeply about the potential problems. By analysing these and working out which are the most compelling, they try to think of ways to address them in advance.

An important part of this work is recognising the roles and responsibilities of different actors in the digital sphere. For example, often criticisms of profiling don’t distinguish among those who are doing it. Yet this is important: a company has a different duty to clients than a police force does to members of the public in a democratic society. Hadjimatheou’s group tries to weigh the arguments and take into account who they’re talking to and what their competencies and roles are.

Profiling with big data is used in many ways. First, mining big data makes it possible to find useful patterns and correlations that cannot be detected by other means.

Second, profiling is used in risk assessments in applications such as insurance, criminal justice, and border control. Hadjimatheou offered three examples drawn from American Express, the Chicago police, and IBM. In the first of these, a businessman who had been an exceptionally good customer for many years found his American Express card credit limit massively reduced simply because he had shopped in a store in which people with poor credit ratings also shopped. This was only revealed to him by the credit card company after he applied a lot of pressure, including via the mainstream media.

The Chicago police use algorithms to identify individuals at high risk of being involved in gun crime, and treat the results as a basis for attempting to intervene. It’s not clear whether the results are given to police responding to emergency 911 calls, or what other agencies have access to the system. London’s Metropolitan Police has a similar effort, though one that’s less algorithmic and data-driven. In London, when the Metropolitan Police estimates that someone is highly likely to be a gang member, the suggestion is shared with housing, social services, and youth services, and these disclosures have affected people’s access to services, including housing.

Finally, IBM Enterprise Intelligence is advertised as being able to tap into many sources of data, including dark web data, to make risk assessments of individuals applying for asylum. The company doesn’t specify how its results may be used.

If these decisions are to serve ethical values such as autonomy and fairness, respect people’s capacity to choose how to live their lives, and give them the opportunity to pursue their choices, they should be made for transparent reasons.

There have been a number of criticisms of each of these types of profiling. Some of these criticisms misfire and miss what is really at stake. For example, the ACLU calls American Express’s use of behavioural scoring “economic guilt by association”. The Electronic Frontier Foundation objects on the basis that this “minority report” approach treats people as if they have already committed a crime. Hadjimatheou argues, however, that the “minority report” argument would only be valid if being on the list led to arrest and a presumption of guilt – because that is how people who have committed a crime are treated. Treating people as more likely to commit a crime is different, and the legitimacy of that treatment depends on the strength of the evidence on which it’s based.

Hadjimatheou’s suggested alternative approach is to look at the case through a contractualist lens and consider what complaint someone might make. The American Express decision left the man with no opportunity to adapt his behaviour to change his risk score, and the fact that the score was used at all, given his lengthy history with the company, confounded reasonable expectations. The decision fails on fairness: the questionable and obscure basis for the risk score offers no opportunity for challenge and means that people are treated as poor credit prospects for potentially unsound reasons.

The “guilt by association” criticism levels the same objection at all three of the above cases, but fails to acknowledge that the practical implications differ between contexts. The disadvantage and unfairness inflicted as a result of profiling are worse in the Chicago case, for example, where they affect people’s access to basic goods such as housing, than in the American Express case, where the decision does not close off alternative channels for securing credit. As a general principle, however, these systems should give people opportunities to challenge the results and explain the basis of the scores.

In response, an attendee said that while working in the data mining industry they had realised they had no scientifically understandable model they could use to defend themselves in court if someone died, because the underlying correlations only surface after the fact, when questions are raised. In those cases, it’s never clear whether the reason given is believable, so the question is how to make these systems accountable if we can’t trust businesses to tell us the truth. Answering these questions is still at an early stage. Hadjimatheou noted that police increasingly rely on private companies to monitor and analyse the dark web, some of which have a monopoly on historical data; as a result, some police say they’re not sure whether the evidence they have is reliable and will hold up in court. Hadjimatheou thought there might be an analogy to white-hat hackers testing cyber security that could be used as a model for testing algorithms. A commenter added that finding out about problems after the fact is “part of the process of getting algorithms to match our moral sensibilities, which are changing.”

Angela Sasse felt that it’s essential to be aware of and address some of the human fallibilities in the learning itself. Objections to algorithmic systems are often batted away by saying that humans are more prejudiced than algorithms, a claim that needs to be challenged. It’s essential to get to the point where we can ensure that there is sound evidence that a prediction is valid. Law enforcement, if asked what data is useful and how long it should be kept, feel morally justified in saying “everything” and “forever” because it might be useful. A respondent with knowledge of the intelligence services disputed this, saying that law enforcement obeys the law, which prohibits keeping data forever, and that everyone should be aware that these are issues law enforcement discuss among themselves all the time. This led Tom Sorell to ask how ethicists can talk to practitioners: the more generally ethicists speak, the less people understand them; the more they “go native” with the intelligence services and the police, the less normative they are.

This talk was presented at the November 24, 2017 RISCS meeting, “Ethics, Cybersecurity, and Data Science”, hosted by the Interdisciplinary Ethics Research Group at the University of Warwick.

About Wendy M. Grossman

Freelance writer specializing in computers, freedom, and privacy. For RISCS, I write blog posts and meeting and talk summaries.