An MIT professor explains why we are still a long way off from solving one of the biggest problems with self-driving cars

“The idea of a robot having an algorithm programmed by some faceless human in a manufacturing plant somewhere making decisions that have life-and-death consequences is very new to us as humans,” said Iyad Rahwan, an associate professor at the MIT Media Lab.

Rahwan helped bring the issue to the surface in October 2015, when he co-wrote a paper titled “Autonomous vehicles need experimental ethics.”

But the debate arguably moved to the forefront when Rahwan launched MIT’s “Moral Machine,” a website that poses a series of ethical conundrums to crowdsource how people feel self-driving cars should react in tough situations. The Moral Machine is an extension of Rahwan’s 2015 study.

Rahwan said that since launching the website in August 2016, MIT has collected 26 million decisions from 3 million people worldwide. He is currently analyzing whether cultural differences play a role in the responses.

“It’s not about a specific scenario or accident; it’s about the overall principle that an algorithm has to use to decide relative risk,” Rahwan said.

The National Highway Traffic Safety Administration acknowledged in a September report that self-driving cars could favor certain decisions over others even if they aren’t programmed explicitly to do so.

Self-driving cars will rely on machine learning, a branch of artificial intelligence that allows computers, or in this case cars, to learn over time. Since cars will learn how to adapt to the driving environment on their own, they could learn to favor certain outcomes.
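To make the point concrete, here is a toy sketch (entirely hypothetical, not any manufacturer's actual code) of how a planner that simply minimizes a learned cost can end up with an implicit decision rule: no ethical preference is ever written down, yet the learned weights systematically favor certain outcomes.

```python
# Hypothetical learned weights: how strongly the model penalizes each
# predicted consequence of a maneuver. The values are made up for
# illustration; in a real system they would emerge from training data.
learned_weights = {
    "occupant_risk": 0.9,
    "pedestrian_risk": 0.6,
    "property_damage": 0.1,
}

def cost(maneuver):
    """Expected cost = sum of predicted risks times learned weights."""
    return sum(learned_weights[k] * v for k, v in maneuver["risks"].items())

def choose(maneuvers):
    """Pick the maneuver with the lowest expected cost."""
    return min(maneuvers, key=cost)

maneuvers = [
    {"name": "swerve",
     "risks": {"occupant_risk": 0.5, "pedestrian_risk": 0.0, "property_damage": 0.8}},
    {"name": "brake",
     "risks": {"occupant_risk": 0.1, "pedestrian_risk": 0.3, "property_damage": 0.0}},
]

# Because occupant risk happens to be weighted more heavily than
# pedestrian risk, the planner consistently prefers "brake" -- an
# ethical trade-off that no engineer explicitly programmed as a rule.
print(choose(maneuvers)["name"])  # prints "brake"
```

The point of the sketch is that the "decision rule" lives entirely in the numeric weights: change how the model is trained and the preferred maneuver changes, even though no line of code ever states an ethical preference.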

“Even in instances in which no explicit ethical rule or preference is intended, the programming of an HAV [highly automated vehicle] may establish an implicit or inherent decision rule with significant ethical consequences,” NHTSA wrote in the report, adding that manufacturers must work with regulators to address these situations.

“In the long run, I think something has to be done. There has to be some sort of guideline that’s a bit more specific. That’s the only way to obtain the trust of the public,” Rahwan said.

Rahwan said programming for specific outcomes isn’t the right approach, but he thinks companies should be doing more to let the public know that they are considering the ethics of driverless vehicles.

Source: Business Insider

