In modern ethical discussions, the most famous thought experiment is the Trolley Problem, of which there are innumerable variations. It has been used to frame debates between virtue and consequentialist approaches to ethics, as a source of human intuitions to test in experimental philosophy, and in plenty of discussion about the legal and ethical issues raised by autonomous cars.
For a reminder of the problem, or if you haven't seen it yet, here is a classic example:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two (and only two) options:
1. Do nothing, in which case the trolley will kill the five people on the main track.
2. Pull the lever, diverting the trolley onto the side track where it will kill one person.
Which is the more ethical option? Or, more simply: What is the right thing to do?
Readers should take note of their intuitions and responses for later reference.
Obviously this is a contrived case that you are unlikely ever to face, although there are more plausible examples, and we will cover a real, but extremely rare, one later. The point of a thought experiment is to test our thinking and intuitions by providing a plausible, but generally unrealistic, edge case. However, a useful thought experiment in ethics cannot stray too far from a normal decision-making context, and the Trolley Problem rests on an important abstraction from reality: it assumes epistemic, or predictive, certainty.
The trolley problem, as normally constructed, assumes that you know with certainty what the outcome of each option will be, so your choice is between two certain outcomes. However, there is a large literature on the unreliability of human prediction: perfect foreknowledge is rarely possible. This means that our ethical decisions are almost always made without certainty about what the outcomes will be.
Does uncertainty change the moral calculus?
The addition of uncertainty to ethical decisions might seem irrelevant, unless it changes our moral intuitions and the decisions that we would make. My hunch is that it would, but I haven't seen any research on the question, so I would be grateful for pointers from readers.
In the absence of known research, let’s explore some intuitions via an alternative version of the trolley problem with uncertainty thrown in:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people who are looking in the other direction. If you pull the lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two (and only two) options:
1. Do nothing, in which case the trolley will likely hit and kill the five people on the main track.
2. Pull the lever, diverting the trolley onto the side track where it will likely hit and kill one person.
Does this change your intuitions about the problem? Does it feel like a genuine difference to the morality of the situation? Let's increase the uncertainty for one more case:
There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people who are looking in the other direction. If you pull the lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two (and only two) options:
1. Do nothing, in which case the trolley might hit and kill the five people on the main track.
2. Pull the lever, diverting the trolley onto the side track where it might hit and kill one person.
One hypothesis is that, even though on a numerical consequentialist assessment of the risks the balance is the same in all three cases, people would be more comfortable pulling the lever when the outcome is uncertain. The intuition is that, as you won't definitely kill someone by pulling the lever, the barrier to acting decreases.
An alternative hypothesis is that people are less likely to pull the lever, as there is less impetus to act when the outcomes are uncertain. Let's test the intuitions of my readers about how their actions would change with a quick poll:
More in-depth comments or reactions are also welcome.
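As an aside for the numerically minded, the claim above that the balance is the same in all three cases can be checked with a little expected-value arithmetic. The sketch below is only illustrative: the probabilities I attach to "likely" and "might" are my own assumptions, not part of the thought experiment.

```python
# Illustrative expected-death comparison for the three variants.
# The probabilities attached to "likely" and "might" are assumptions,
# chosen only to show that the ratio between the two options stays the
# same whenever both tracks share the same probability of a hit.

variants = {
    "certain (original)": 1.0,
    "likely (assumed)": 0.9,
    "might (assumed)": 0.4,
}

for label, p in variants.items():
    expected_do_nothing = p * 5   # five people on the main track
    expected_pull_lever = p * 1   # one person on the side track
    print(f"{label:>20}: do nothing = {expected_do_nothing:.1f} expected deaths, "
          f"pull lever = {expected_pull_lever:.1f}")
```

Whatever probability you choose, as long as both tracks share it, pulling the lever reduces expected deaths by the same factor of five. If our intuitions do shift across the three cases, the shift is not being driven by the arithmetic.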
The original Trolley Problem is known for its countless variations, and adding different sorts of epistemic uncertainty can multiply these greatly. I may cover some in a later post if there is interest. Cases where the uncertainty differs between the two options look particularly interesting, as I suspect people would tend to choose the option with greater uncertainty. However, hopefully it is now at least plausible that the epistemic certainty attached to different outcomes is relevant to our ethical decisions.
Consequences of uncertainty
The modern version of the Trolley Problem was first raised by Philippa Foot in the 1960s as a way of arguing for virtue ethics and against consequentialist approaches. Her argument was that actively pulling the lever is a morally different act from not intervening, and that people would be uncomfortable actively killing the one person. For this argument, the complete certainty of outcomes seems necessary, as increasing uncertainty lowers the cost of pulling the lever.
However, looking at the bigger picture, adding uncertainty into our ethical decision making causes broader problems for a consequentialist approach. Consequentialist ethics, put simply, requires us to consider the outcomes of the available actions and choose the one that produces the greatest good for the greatest number of people (or the least harm to the fewest).
However, if we don't know what the outcomes will be with any certainty - and especially where different people have different views on what the outcomes will be - then a rigorous consequentialist ethics becomes very difficult. How do you weigh outcomes that might, could or probably will eventuate against one another?
Once we factor in uncertainty about our knowledge and the future, deontological (rule-based) or virtue ethics seem like attractive alternatives. At least they don't face the same epistemic uncertainties in moral decisions.
This issue also plays out in a different way. If we frame an ethical decision in consequentialist terms, this can lead us to assume certainty about outcomes and ignore or downplay the predictive uncertainty involved. We can illustrate this with a historical decision that was justified on consequentialist grounds.
Harry Truman justified his decision to drop the atomic bombs on Hiroshima and Nagasaki in 1945 in a way that is analogous to the trolley problem: he had to choose between killing around 100,000 people through the bombs or stopping “the war that would have killed a half a million youngsters on both sides if those bombs had not been dropped.”
Framed in these terms, the decision is entirely rational. However, one vein of criticism of the decision is that it compared a near certainty that the atomic bombs would kill that many people with a far more uncertain prediction that depended on future decisions by Japanese and American leadership. To push this thinking to an extreme, killing 100,000 to prevent the possibility that 1,000,000 will die seems like a poor decision, whereas killing 100,000 to prevent 1,000,000 certain deaths is very rational.
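To make that asymmetry concrete, here is a rough expected-value sketch. The probabilities are purely illustrative assumptions on my part, not historical estimates, and the 1,000,000 figure is just the round number used in the extreme framing above.

```python
# Rough expected-value framing of the Truman example.
# All probabilities are illustrative assumptions, not historical claims.

deaths_from_bombs = 100_000        # treated here as a near certainty
deaths_from_invasion = 1_000_000   # round figure from the extreme framing above

def expected_deaths(p_invasion: float) -> tuple[float, float]:
    """Expected deaths for (drop the bombs, do not drop the bombs)."""
    drop = deaths_from_bombs
    dont_drop = p_invasion * deaths_from_invasion
    return drop, dont_drop

for p in (1.0, 0.5, 0.09):
    drop, dont_drop = expected_deaths(p)
    print(f"p = {p:.2f}: drop = {drop:>9,.0f} expected deaths, "
          f"don't drop = {dont_drop:>9,.0f}")
```

On these numbers the break-even point sits at a probability of 0.1: below it, the bombing costs more expected lives than it saves. Whether the decision looks rational depends almost entirely on how certain the counterfactual is taken to be.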
The point here is not to argue which accurately describes the situation but to point out that the epistemic framing of an ethical decision matters significantly. Being clear about what we do and don't know, and how confident we are in that knowledge, is a relevant input to how we frame and make ethical decisions.
I understand why uncertainty about our knowledge and the future makes a consequentialist approach to ethics difficult, but don't we still face epistemic uncertainty when it comes to deontological or virtue ethics? How can we be confident about the deontological rules we subscribe to? Incorrectly predicting the consequences of an action produces a bad result in the particular case under consideration, but applying a mistaken general rule can have much broader ramifications than a single wrong prediction.
Uncertainty matters, unless it doesn't. If 100% of people indicated that they would not change their decision (and this held true in reality), what would we say?