If a Car Can Drive Itself, Can It Make Life-or-Death Decisions?

What would Aristotle think about self-driving cars?

As artificial intelligence systems grow more capable of automating complex tasks, warnings about the dangers of outsourcing life-and-death decisions to machines are pumping the brakes on these powerful technologies. From Immanuel Kant to John Stuart Mill, philosophers have wrestled with the age-old questions that autonomous vehicles are now raising, in new and urgent ways, for businesses and their leaders.

“At some point soon, genuine ethical decisions will be delegated to these systems—to some extent, they are already,” says Joseph Badaracco, the John Shad Professor of Business Ethics at Harvard Business School. “And by genuine ethical decisions, I mean decisions about the rights of other people, about consequences and risks and benefits for other people, and also in a murkier but important sense, the ethics of an organization’s culture and its values.”

Elon Musk and Tesla are at the center of a recent HBS case study that essentially asks: How can machines make complex decisions that prioritize one life over another? Does a self-driving car swerve to avoid a pedestrian if it means the driver or passenger gets injured, and how does that calculus change if the pedestrian is jaywalking or in a group, elderly or a child?

Applying ‘the trolley problem’ to AV design

It’s a modern version of what philosophers call “the trolley problem,” and it pits fallible humans, with brains and souls, against machines that can’t overthink a situation or get tired behind the wheel.

The “trolley problem” originated in the 1960s with an Oxford philosopher, who used an out-of-control trolley to study the ethics of human decision-making. It goes like this: A runaway trolley is speeding down the track toward five people; the protagonist stands next to a lever that, if pulled, will divert the trolley to a second track where just one person is in its path. A philosophical debate ensues about minimizing the number of deaths versus refraining from murder: actively killing one person or passively watching five die.

A twist examines whether the protagonist would choose to push a person heavy enough to stop the trolley into its path, halting it before it reaches those down the track. The case quotes Harvard University psychology professor Joshua Greene saying, “Were a friend to call you from a set of trolley tracks seeking moral advice, you would probably not say, ‘Well, that depends. Would you have to push the guy, or could you do it with a switch?’”

Fast forward to AV design, and some companies hard-code ethical values—for example, the numerical value of a human life—into an algorithm. Others set up self-teaching AI to “learn” what risks the AV can take. Countries often want automated systems to adhere to local values; advocates usually seek universal norms.
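
To make that design choice concrete, here is a minimal, hypothetical sketch of what “hard-coding ethical values into an algorithm” could look like: the weights attached to different kinds of harm are fixed constants chosen by the designers, rather than trade-offs a self-teaching system infers from driving data. Every name, weight, and maneuver below is an illustrative assumption, not any manufacturer’s actual logic.

```python
# Hypothetical illustration of hard-coding ethical weights into an AV decision rule.
# The weights and maneuvers are invented for this sketch; a self-teaching approach
# would instead learn such trade-offs from data rather than fix them up front.

from dataclasses import dataclass

# Fixed, human-chosen weights: how heavily each kind of harm counts.
HARM_WEIGHTS = {
    "pedestrian_injury": 10.0,   # assumption: pedestrian harm weighted most heavily
    "passenger_injury": 8.0,
    "property_damage": 1.0,
}

@dataclass
class Maneuver:
    name: str
    risks: dict  # estimated probability of each harm type if this maneuver is taken

def expected_harm(maneuver: Maneuver) -> float:
    """Weighted sum of harm probabilities, using the fixed weights above."""
    return sum(HARM_WEIGHTS[harm] * p for harm, p in maneuver.risks.items())

def choose_maneuver(options: list) -> Maneuver:
    """Pick the maneuver with the lowest expected weighted harm."""
    return min(options, key=expected_harm)

if __name__ == "__main__":
    options = [
        Maneuver("brake_in_lane", {"pedestrian_injury": 0.30, "passenger_injury": 0.05}),
        Maneuver("swerve_right",  {"pedestrian_injury": 0.02, "passenger_injury": 0.20,
                                   "property_damage": 0.90}),
    ]
    best = choose_maneuver(options)
    print(f"Chosen maneuver: {best.name} (expected harm {expected_harm(best):.2f})")
```

The ethical judgment in such a design lives entirely in those constants; change one number and you change whose risk the car is willing to accept.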

The public lambasts carmakers that say their AVs will prioritize passengers’ lives over those of pedestrians, but research shows most customers will not step inside truly egalitarian vehicles. Marketing hype spurs adoption, a necessary step for training AI systems, but customers who overestimate AV capabilities may crash and sometimes die.

AI, AVs, and saving lives

Musk and Tesla argue that automation, with less emotion behind the wheel, could address real-world safety problems. As the case explains, motor vehicle crashes kill 40,000 people in the United States yearly.

Tesla drivers using its semi-autonomous Autopilot system recorded 0.18 accidents per million miles, compared with the US average of 1.53 accidents per million miles. The technology’s critics, though, home in on the 17 deaths linked to Autopilot since its 2019 rollout, saying the system is still too dangerous to deploy more broadly.

At a 2022 Tesla event, Musk was quoted as saying:

“At the point [at] which you believe that adding autonomy reduces injury and death, I think you have a moral obligation to deploy it even though you’re going to get sued and blamed by a lot of people. Because the people whose lives you saved don’t know that their lives were saved. And the people who do occasionally die or get injured, they definitely know—or their state does.”

While not all AI ethics debates carry stakes as high as those related to AVs, many raise important social questions. In the world of content, for instance, some laud AI’s ability to produce writing that could pass as human; others point to biases and microaggressions in computer-generated speech.

Google Maps for ethical leaders

Badaracco worries that leaders are caught up in the hype about what machines could do and are not focusing enough on what they should do. From utilitarianism and deontology to neural networks and emergent behavior, he proposes a roadmap for thinking about ethical ways to delegate decisions to machines.

Ask stupid questions. “People don’t want to look dumb in board meetings or in our classrooms,” Badaracco says, “but you have got to be willing to be a little dumb on this stuff.”

Managers cannot be experts on everything, but AI represents such a foundational, significant change to decision-making that they must try. “When you’re in meetings with the people who are designing these systems, you have to be really persistent in asking questions,” Badaracco says. “They may be dumb questions from the viewpoint of whoever is doing the coding or the design, but you’ve got to get answers, and you have to be satisfied with the answers.”

Don’t move fast or break things. Many Silicon Valley startups have adopted the “fail fast” approach of releasing innovative products quickly rather than waiting for perfection.

Badaracco disagrees: “It’s a potentially catastrophic mistake, with technology like this, to have that mentality. You really have to err on the other side, given the uncertainties, the potential damage, the fears, the media spotlight, and your own sense of responsibility. You want to introduce quality, reliable products that you’re proud of and that customers can use safely and effectively.”

Consider utilitarianism and the “march of the nines.” A utilitarian, weighing the morality of an action based on its consequences, might see AI as an improvement over the status quo of 40,000 annual motor vehicle deaths. AVs are good at avoiding mistakes, and most of their crashes are caused by human error or rare scenarios.

However, Badaracco says, “Working 99 percent of the time isn’t good enough.” By some calculations, human drivers are 99.999819 percent crash-free. “Before there are going to be genuinely autonomous systems that the public accepts and that regulators permit, there will have to be a lot of nines after the decimal point,” Badaracco says.
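
A back-of-the-envelope way to see the “march of the nines”: if crashes are counted per mile driven, a rate of r crashes per million miles corresponds to being (1 - r/1,000,000) x 100 percent crash-free per mile, and each additional nine after the decimal point cuts the tolerated crash rate by a factor of ten. The short sketch below runs that arithmetic on the figures quoted in this article; the case’s 99.999819 percent figure presumably rests on its own underlying inputs.

```python
# Back-of-the-envelope "march of the nines": convert crashes per million miles
# into a per-mile crash-free percentage, then see what each extra nine demands.
# Rates are the figures quoted in the article; mapping them to a per-mile
# percentage is an illustrative assumption, not the calculation used in the case.

def crash_free_percent(crashes_per_million_miles: float) -> float:
    """Per-mile crash-free percentage implied by a given crash rate."""
    return (1.0 - crashes_per_million_miles / 1_000_000) * 100

for label, rate in [("US average", 1.53), ("Tesla Autopilot", 0.18)]:
    print(f"{label}: {rate} crashes per million miles "
          f"-> {crash_free_percent(rate):.6f}% crash-free per mile")

# Each added nine after the decimal point shrinks the tolerated crash rate tenfold.
for nines in range(3, 8):
    pct = 100 - 10.0 ** (2 - nines)        # e.g. 99.999% when nines == 5
    rate = (100 - pct) / 100 * 1_000_000   # implied crashes per million miles
    print(f"{pct:.5f}% crash-free allows about {rate:.3f} crashes per million miles")
```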

Deontology teaches “aggressive responsibility.” A deontologist, focused on universal duties and morals, might concentrate on leaders’ responsibilities to their teams and customers.

“When you’re beginning to develop an AI product or service, embed ethical considerations from the very start,” Badaracco says. “These are boundaries or parameters you don’t want to cross. They could involve privacy, safety, or bias. Tell your team which issues you want to hear about as they arise.”

Leaders who delegate tasks to human subordinates are still responsible for the consequences; delegating decisions to an AI system doesn’t absolve leaders of responsibility, either. “Be vigilant,” Badaracco says. “Try to control the uses of what you produce, and develop a value, or a culture, of aggressive responsibility.”

Learn a little from monks and psychopaths. Badaracco goes back to the trolley problem. When participants could flip a switch to divert the trolley, Greene showed that the logical parts of their brains activated, and they chose the utilitarian route of killing one to save many. When they had to push another person into its path, the emotional parts of their brains activated, and they took the deontological route of refusing to kill.

Buddhist monks and psychopaths stayed logical regardless of circumstance, and Badaracco advocates for a less extreme version of that mindset.

“You want to strip away fears, excessive hopes, and hype. To a significant degree, you want to look at things objectively,” he says. “You’re in a sphere where your experience, things you have done and did well, mistakes you have made, and your thoughts and feelings about this, are all going to commingle, and come into play. And that’s the nature of judgment. But you want your judgment, as much as possible, to be sequestered from things that can really distort it.”

Be collaborative. Badaracco advocates for a holistic view of AI. “If it’s heavily marketing- or technology-oriented, that could go down the wrong path. You want people who are sensitive to cost, to marketing considerations, you may want somebody with a relevant legal or regulatory background. You need multi-disciplinary involvement from the very beginning,” he says.

Including other people is not just helpful—it is ethical. “I don’t think you want to make these decisions yourself,” Badaracco says. “You want really good, honest back and forth with thoughtful, knowledgeable people. We don’t fully understand how these AI programs work, but we don’t fundamentally understand how our own brains make decisions either.”

Tom Quinn is a case researcher at HBS and coauthor of the case “Automating Mortality: Ethics for Intelligent Machines.”
