What happens as machines are called upon to make ever more complex and important decisions on our behalf?
Driverless cars are among the early intelligent systems being asked to make life-or-death decisions. While current vehicles perform mostly routine tasks like basic steering and collision avoidance, the new generation of fully autonomous cars now being test-driven poses unique ethical challenges.
For example, “should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?”
Alternatively, should a car “swerve onto a crowded sidewalk to avoid being rear-ended by a speeding truck or stay put and place the driver in mortal danger?”
On a more mundane level, driverless cars have already raised safety questions by strictly obeying traffic laws, creating a hazard when the surrounding traffic moves substantially faster.
Digital Assistants and Our Health
Imagine for a moment a digital assistant that processes a note from our doctor warning us about the results of our latest medical checkup: we need to lose weight and stay away from certain foods. At the same time, the assistant sees from our connected health devices that we are hardly exercising anymore, that we have been consuming a lot of junk food lately, and that we gained three pounds last week and two more already this week. Now it is quitting time on Friday afternoon, and the assistant knows that every Friday night we stop by our local store for a 12-pack of donuts on the way home. What should that assistant do?
Should our digital assistant politely suggest we skip the donuts this week? Should it warn us in graphic detail about the health complications we will likely face down the road if we buy those donuts tonight? Should it threaten to lock us out of our favorite mobile games, or withhold our email or some other feature for the next few days, as punishment if we buy those donuts? Should it quietly send a note to our doctor telling her we bought donuts and asking for advice? Or should it go as far as to instruct the credit card company to decline the transaction to stop us from buying the donuts?
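One way to make the dilemma concrete is to notice that each of these options sits on an escalation ladder, from silence to outright intervention. The sketch below is purely illustrative: the `HealthContext` fields, the thresholds, and the intervention tiers are all invented for this example, not drawn from any real assistant's design. It simply encodes the value judgment that the assistant should pick the least intrusive option the evidence supports, and stop to ask before crossing into contested territory.

```python
from dataclasses import dataclass

# Hypothetical sketch of an assistant's intervention policy. All fields,
# thresholds, and tier names here are invented for illustration.

@dataclass
class HealthContext:
    doctor_flagged_diet: bool          # the doctor's note warned about diet
    weekly_weight_gain_lbs: float      # from connected health devices
    junk_food_purchases_this_week: int

def choose_intervention(ctx: HealthContext) -> str:
    """Pick the least intrusive intervention the evidence supports.

    The ordering of the checks encodes a value judgment: gentle nudges
    come before warnings, and anything stronger (telling the doctor,
    blocking the purchase) requires the user's explicit consent first.
    """
    if not ctx.doctor_flagged_diet:
        return "stay_silent"
    if ctx.weekly_weight_gain_lbs < 1 and ctx.junk_food_purchases_this_week == 0:
        return "polite_suggestion"
    if ctx.weekly_weight_gain_lbs < 2:
        return "health_warning"
    # The stronger measures debated above are deliberately gated behind
    # consent rather than applied automatically.
    return "ask_user_for_consent_to_escalate"

print(choose_intervention(HealthContext(True, 2.5, 3)))
```

Even this toy policy shows where the hard questions live: every threshold and every tier boundary is an ethical choice someone has to make on the user's behalf.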
The Cultural Challenge
Moreover, how should algorithms handle the cultural differences that are inherent to such value decisions? Should a personal assistant of someone living in Saudi Arabia who expresses interest in anti-government protest movements discourage further interest in the topic? Should the assistant of someone living in Thailand censor the person’s communications to edit out criticism of government officials to protect the person from reprisals?
Should an assistant that determines its user is depressed try to cheer that person up by masking negative news and deluging him with the most positive news it can find to try to improve his emotional state? What happens when those decisions are complicated by the desires of advertisers that pay for a particular outcome?
As artificial intelligence develops at an exponential rate, what are the value systems and ethics with which we should imbue our new digital servants?
When algorithms start giving us orders, should they fulfill our innermost desires or should they save us from ourselves?
This is the future of AI.
Read more on AI ethics in our post: How To Teach Robots Right and Wrong