Tom Sorell leads the Interdisciplinary Ethics Research Group at the University of Warwick. In his talk, he sought to set out a general sense of what ethics is. He broke it down into three parts: ordinary moral training; the theory that ethicists develop; and the application of that theory to practices and situations that are unanticipated in moral training.

Ordinary moral training is the basis of ethics, and it consists of teaching Dos and Don’ts in specific situations. Don’t lie, Don’t steal, and Don’t break promises are all examples. The set of the most common Dos and Don’ts is the code adults teach children, primarily at home and at school. It is mostly unwritten, and to some extent it teaches children not to follow certain strong natural inclinations we are all born with. Ordinary moral training becomes less explicit in adulthood, and less domestic, as people join professions and are exposed to professional codes, or are influenced by campaigns about their obligations to, for example, animals, the environment, the poor, and the displaced.

At the beginning, Dos and Don’ts are either not explained or explained in an ad hoc way. Moving to backing such instructions with general reasons is a crucial transition, in part because huge areas of morality – international affairs, for example – are left out of moral training and yet are covered by some of the general reasons for Dos and Don’ts in the domestic sphere. Moral theory helps to articulate general reasons for Dos and Don’ts across a wider area than moral training. Indeed, it aims to come up with judgements of rightness and wrongness in all spheres of human life. “It seems ambitious, but there’s been 3,000 years of work on this, and it’s not outrageous to say we have very developed moral claims about all of these spaces,” Sorell said.

Moral theory generalises from the specific instances covered in moral training to create classes. For example, one might be told not to lie, steal, or break promises at various times in moral training; moral theory would group these together by classifying them as examples of disrespecting persons and derive the principle that we should respect persons. Alternatively, moral theory might say it increases welfare not to steal and break promises, and that we ought to maximise welfare. There is a strong analogy between this process and the way observation and theory work in science: if the value judgements are plausible they may support the acceptance of the principles that imply or justify those judgements.

Sorell went on to discuss several types of (normative) moral theory: utilitarianism, Kantian theory, contractualism, and virtue theory.

Normative moral theory continues the role of filtering mechanisms from moral training: it filters out bad inclinations by providing general reasons for not acting on those inclinations.

Utilitarianism holds that what’s right is what maximises welfare and well-being. Welfare here covers values such as freedom from pain, poverty, and fear of others; being educated; being healthy; and having access to healthcare. Many of these are among the generally agreed-upon basic human rights.

Kantian theory holds that anything is wrong that disrespects the rational autonomy of human beings, where “autonomy” is defined as being able to act in a way that everyone could in the same circumstances. The theory implies that the height of immorality is making yourself an exception to a rule everyone could and should adopt.

Contractualism, espoused for example by Harvard’s Tim Scanlon, captures elements common to utilitarianism and Kantian theory: what’s right is what no one affected by a policy could reasonably object to. Grounds for reasonable objection include personal harm, incapacitation, disproportionate burdens, and discrimination.

Virtue theory, which goes back to the Greek philosophers, identifies as right whatever someone with the cardinal virtues of justice, self-control, courage, and wisdom would do.

Each of these theories has areas where it is strongest: virtue theory and Kantian theory for small-scale interpersonal issues; utilitarianism and contractualism for public policy issues where the numbers affected matter most; and contractualism for public policy issues in which coercion and autonomy matter. All of them, however, have much greater scope than ordinary moral training, and many professions – notably the judiciary – are covered by them. They also apply to the subjects of single-issue campaigning, such as animals, the environment, immigration, and poverty, where reams have been written about the rights and wrongs.

In developing moral theory, philosophers construct arguments that draw analogies from non-controversial, settled cases to new cases that are the subject of debate. If we could say that data ethics is similar to medical ethics, we could draw on a long tradition of cases and principles; however, the two are nothing like one another, and such an effort fails. Philosophers also evaluate theories and their failures of application; one such exercise might be writing an article explaining why medical ethics can’t be used to understand the ethics of data collection.

In answer to questions that came up in response, Sorell explained that philosophers decide first whether something is prima facie wrong or right before ranking the considerations that matter in reaching a final conclusion. We see this at work where bugging someone’s bedroom – prima facie wrong – might be justified on balance in the investigation of a very serious crime, even though it would be unacceptable for investigating a traffic violation. Some writing on utilitarianism incorporates learning as circumstances change over time, and balance is built into its categories of welfare. Contractualism and utilitarianism differ over the importance of the individual: utilitarianism will accept that one person may suffer disproportionately if the gain to everyone else is very high, whereas contractualism denies this.

This talk was presented at the November 24, 2017 RISCS meeting, “Ethics, Cybersecurity, and Data Science”, hosted by the Interdisciplinary Ethics Research Group at the University of Warwick.

Wendy M. Grossman

Freelance writer specializing in computers, freedom, and privacy. For RISCS, I write blog posts and meeting and talk summaries.

