The Ethical Dilemma Behind Self-Driving Cars

Juliette Fang, Staff Writer

Imagine that you’re driving down the road when you see a pedestrian walk right into your path. Without time to brake, your only two options are to hit the pedestrian or risk yourself by swerving into a concrete barrier. Whichever choice you make at this moment will require an ethical judgment. Is it better to risk yourself or someone else? What if the person is a child, or what if they’re crossing illegally?

Soon, machines may have to make these decisions for us. Once an idea straight from the pages of science fiction, self-driving cars are becoming a reality through advancements made by companies such as Google and Tesla. These autonomous vehicles sense their surroundings so they can avoid crashes and steer safely without human control.

However, the rise of autonomous vehicles in our everyday lives presents an unexpected ethical dilemma. While taking human error out of transportation will make roads safer, nothing is foolproof, and unavoidable accidents will still occur. In instances like the one above, programmers must decide in advance exactly how the vehicle will react, codifying a choice that a human driver would make in a split second, almost at random. Unfortunately, there is no single answer for how this issue should be approached.

Opinions on how self-driving cars should be programmed to react vary, as shown in a study conducted by MIT. Through a website called the Moral Machine, the researchers presented people from across the globe with 13 scenarios similar to the one above and asked which option they would choose, looking for a common trend in how people thought autonomous vehicles should react. However, the results varied greatly across regions and backgrounds, showing that there is no universal, agreed-upon moral rule for unavoidable crashes.

For example, people from places with stronger governmental institutions, such as Japan or Finland, were more likely to favor hitting an illegal crosser than citizens of nations with weaker governmental institutions, such as Pakistan. Although a few consistent trends emerged, such as preferring to spare a larger group over a smaller one, most people disagreed on which path a car should take. Without such an agreement, no single programmed response can satisfy the moral codes of every group of people.

Freshman Hazel Wong said, “I think [the study] gives a lot of insight on personal values,” but she agrees that “people need to find a consensus, because it’s going to be very inconvenient if the self-driving car swerves one way and a select portion of people are unhappy with the response of the car.” 

Even a seemingly simple parameter, such as “minimizing harm,” delves into murky territory. Should the car minimize harm to the driver and passengers, or to pedestrians? Some scenarios may force a choice between two groups of people where the human cost is the same, taking “minimizing harm” out of the equation entirely.

Self-driving car crashes also have legal ramifications. In the event of such an accident, there is the question of who should be held responsible. If the vehicle is operated by a human, the outcome of the crash is a reaction spurred by panic. But in a self-driving car, the choice is predetermined by programmers and is far more deliberate, which complicates questions of liability.

“What makes this interesting with self-driving cars is that you actually have to encode these kinds of decisions into software,” said Noah Goodall of the Virginia Center for Transportation Innovation & Research, in an interview with the BBC. “We don’t really understand why people make these kinds of decisions…that’s what makes this so complex.”
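To make Goodall’s point concrete, here is one way such a decision could, in principle, be written down in code. This is a minimal, hypothetical sketch: the scenario fields, harm estimates, and tie-breaking rule are invented for illustration and do not reflect how any real autonomous vehicle is actually programmed.

```python
# A deliberately simplified, hypothetical sketch of what "encoding" a crash
# decision might look like. The fields, harm scores, and tie-breaking rule
# are illustrative assumptions, not any manufacturer's actual logic.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str           # e.g. "brake_straight" or "swerve_into_barrier"
    expected_harm: float   # estimated injury severity, 0 (none) to 1 (fatal)
    harms_occupants: bool  # does this option put the car's occupants at risk?

def choose_action(outcomes: list[Outcome]) -> Outcome:
    """Pick the outcome with the lowest expected harm.

    Even this one-line policy hides ethical choices: it treats harm to
    occupants and pedestrians identically, and ties are broken arbitrarily
    by list order.
    """
    return min(outcomes, key=lambda o: o.expected_harm)

if __name__ == "__main__":
    options = [
        Outcome("brake_straight", expected_harm=0.8, harms_occupants=False),
        Outcome("swerve_into_barrier", expected_harm=0.4, harms_occupants=True),
    ]
    print(choose_action(options).action)  # -> swerve_into_barrier
```

Even in a toy example like this, every number and every rule is a moral judgment that someone had to write down ahead of time, which is exactly the difficulty the Moral Machine study exposed.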

While these scenarios may seem like simple hypotheticals, they are important considerations for autonomous vehicle developers to keep in mind. And as self-driving cars become more and more advanced, ethical dilemmas like these may become even more relevant in our everyday lives.

 

Photo by Jonas Leupe