photo by Simon J Brady

Have you ever heard of “The Trolley Problem”? It’s a thought experiment designed to make you consider what you would do in a situation where there is no good outcome, and whether you would attempt to make the outcome less bad.

There are some variations on the experiment, but here is the basic problem: A trolley is rolling down a hill and its brakes are out. There are five people on board, and those five will die if nothing is done. You are standing by a switch that would divert the trolley to another track, where a sand pit would stop it and save the five people. The problem? One person is standing on that track in front of the sand pit, and that person will die if you throw the switch. What do you do? (No, you can’t yell at anyone to get out of the way, and no, you can’t sacrifice yourself.) To reiterate, your choices are these: leave the trolley alone and allow five people to die, or throw the switch, killing the one person on the track but saving the five on board.

This scenario invites the thinker to consider whether they would let fate run its course or consciously act to subvert it. More pointedly, it asks them to weigh the moral difference between actively killing one person and passively letting five die.

While the Trolley Problem is interesting to chew on for novice and advanced philosophers alike, these days it’s also being used by engineers and programmers of autonomous vehicles. Lauren Cassani Davis of The Atlantic describes how the problem is being put to work in autonomous-vehicle engineering:

Chris Gerdes, a professor of mechanical engineering at Stanford and the director of their automotive-research center, has spent years on algorithms for automated vehicles, figuring out how these cars should handle emergencies and make decisions acceptable to society. When I spoke to Gerdes, he had just returned from a test drive on the California streets. He explained that many of the situations driverless cars will face involve conflicting priorities. When a vehicle has no option but to have a collision, which collision is it going to have? This is where trolleys come in.

“These problems we were trying to solve were not simply technical issues, but things that you could actually turn to philosophy for some insight,” Gerdes told me. “In those cases where loss of life may be inevitable—and there will be situations like that—we want the car to make a reasonable decision.” Gerdes also thinks the trolley problem is a useful springboard: “[It] is one way of highlighting the fact that you eventually reach a point where you have to make some decisions, and not everybody will agree.”
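To make Gerdes’ framing concrete, here is a minimal, purely hypothetical sketch of a “least-bad collision” chooser. It is not Gerdes’ algorithm or anything a real car runs; the class, function, and harm scores below are all invented for illustration. Its one honest lesson is that picking the least-bad option is a one-line minimization, and every ounce of moral weight hides in how the harm scores get assigned.

```python
# Purely illustrative: a toy "least-bad collision" chooser, NOT Gerdes'
# actual algorithm. All names and harm scores here are invented.

from dataclasses import dataclass


@dataclass
class CollisionOption:
    description: str       # what the maneuver is
    expected_harm: float   # hypothetical harm score; higher is worse


def least_bad_option(options: list[CollisionOption]) -> CollisionOption:
    # The code is trivial; the contested part is how expected_harm
    # would ever be assigned, which is where the trolley-style
    # disagreements actually live.
    return min(options, key=lambda o: o.expected_harm)


if __name__ == "__main__":
    choices = [
        CollisionOption("stay in lane, strike debris", expected_harm=5.0),
        CollisionOption("swerve right, strike guardrail", expected_harm=1.0),
    ]
    print(least_bad_option(choices).description)
```

In other words, the switch is easy to throw in software; deciding what counts as harm is the trolley problem all over again.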

Truly, if I don’t know how to answer this question, how can I possibly expect a vehicle to? Some argue that even though autonomous cars won’t be able to solve this problem and will inevitably kill people, their overall safety benefit would eclipse the rare occasions when such a decision has to be made. The National Safety Council estimates that motor-vehicle deaths in 2016 totaled 40,200. What I want to know is, what is on the other track if I throw the autonomous-vehicle switch?—Sara Lacey
