Teaching an Algorithm to Understand Right and Wrong


Aristotle observed that “all knowledge and every pursuit aims at some good,” but then continued, “What then do we mean by the good?” That question, in essence, encapsulates the ethical dilemma.

We all agree that we should be good and just, but it’s much harder to decide what that entails.

“We need to decide to what extent the legal principles that we use to regulate humans can be used for machines. There is a great potential for machines to alert us to bias. We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.” – Francesca Rossi, an AI researcher at IBM

Since Aristotle’s time, the questions he raised have been continually discussed and debated. 

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to articulate the principles by which a person should act justly and wisely, how are we to encode them within the artificial intelligences we are creating? It is a question we will need to answer soon.

Cultural Norms vs. Moral Values

Another issue we will have to contend with is deciding not only which ethical principles to encode in artificial intelligences but also how they are coded. For most people, “Thou shalt not kill” is a strict principle. But for a few, such as a Secret Service agent or a soldier, it is more like a strong preference, one that is greatly affected by context.

What makes one thing a moral value and another a cultural norm? Well, that’s a tough question for even the most-lauded human ethicists, but we will need to code those decisions into our algorithms. In some cases, there will be strict principles; in others, merely preferences based on context. For some tasks, algorithms will need to be coded differently according to what jurisdiction they operate in.
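The distinction drawn above, strict principles versus context-weighted preferences that vary by jurisdiction, can be sketched in code. This is a minimal illustration, not a real ethics engine; the rule names, jurisdictions, weights, and threshold are all hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    name: str
    strict: bool          # strict principle (inviolable) vs. preference
    weight: float = 1.0   # penalty weight; only meaningful for preferences

# Hypothetical rule sets, coded differently per jurisdiction as the
# text suggests: same rule names, different strictness and weights.
RULES = {
    "EU": [Rule("no_discrimination", strict=True),
           Rule("respect_privacy", strict=False, weight=0.9)],
    "US": [Rule("no_discrimination", strict=True),
           Rule("respect_privacy", strict=False, weight=0.6)],
}

def permitted(violations, jurisdiction, threshold):
    """Forbid an action outright if it violates any strict rule;
    otherwise sum the weights of violated preferences and compare
    against a context-dependent threshold."""
    penalty = 0.0
    for rule in RULES[jurisdiction]:
        if rule.name in violations:
            if rule.strict:
                return False      # strict principles admit no trade-off
            penalty += rule.weight
    return penalty < threshold
```

With the same threshold, the same action can be allowed in one jurisdiction and refused in another, which is exactly the design burden the paragraph describes.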

Setting a Higher Standard

Most AI experts I’ve spoken to think that we will need to set higher moral standards for artificial intelligences than we do for humans.

Major industry players, such as Google, IBM, Amazon, and Facebook, recently set up a partnership to create an open platform between leading AI companies and stakeholders in academia, government, and industry to advance understanding and promote best practices. Yet that is merely a starting point.

Source: Harvard Business Review


The Ethics of AI: Fulfilling Our Desires vs. Saving Us From Ourselves

What happens as machines are called upon to make ever more complex and important decisions on our behalf?


A display at the Big Bang Data exhibition at Somerset House highlighting the data explosion that’s radically transforming our lives. (Peter Macdiarmid/Getty Images for Somerset House)

Driverless cars are among the early intelligent systems being asked to make life-or-death decisions. While current vehicles perform mostly routine tasks like basic steering and collision avoidance, the new generation of fully autonomous cars now being test-driven poses unique ethical challenges.

For example, “should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?”

Alternatively, should a car “swerve onto a crowded sidewalk to avoid being rear-ended by a speeding truck or stay put and place the driver in mortal danger?”

On a more mundane level, driverless cars have already drawn criticism for strictly obeying traffic laws, which itself creates a hazard when surrounding traffic moves substantially faster.

Digital assistants and our health

Imagine for a moment a digital assistant that processes a note from our doctor warning that, based on our latest medical checkup, we need to lose weight and stay away from certain foods. At the same time, the assistant sees from our connected health devices that we are hardly exercising anymore, have been consuming a lot of junk food, gained three pounds last week, and have already gained two pounds this week. Now it is quitting time on Friday afternoon, and the assistant knows that every Friday night we stop by our local store for a 12-pack of donuts on the way home. What should that assistant do?

Should our digital assistant politely suggest we skip the donuts this week? Should it warn us in graphic detail about the health complications we will likely face down the road if we buy those donuts tonight? Should it threaten to lock us out of our favorite mobile games, or withhold our email or some other feature for the next few days, as punishment if we buy them? Should it quietly send a note to our doctor telling her we bought donuts and asking for advice? Or should it go as far as to instruct the credit card company to decline the transaction and stop us from buying the donuts at all?
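The options listed above form a ladder of escalating intrusiveness. One way to frame the design question is as a policy that maps an assessed health-risk score to the mildest intervention it justifies. The sketch below is purely illustrative; the intervention labels, risk thresholds, and scoring scale are assumptions, not anything proposed in the article.

```python
# Hypothetical escalation ladder for a health-conscious digital assistant.
# Each entry pairs a minimum risk score (0.0-1.0) with an intervention,
# ordered from least to most intrusive.
INTERVENTIONS = [
    (0.2, "politely suggest skipping the donuts"),
    (0.5, "warn in detail about likely health complications"),
    (0.7, "notify the doctor and ask for advice"),
    (0.9, "ask the card issuer to decline the purchase"),
]

def choose_intervention(risk_score):
    """Return the most intrusive intervention whose minimum risk the
    score meets, or None if the risk is too low to intervene at all."""
    chosen = None
    for min_risk, action in INTERVENTIONS:
        if risk_score >= min_risk:
            chosen = action
    return chosen
```

Where the thresholds sit is precisely the ethical choice the article is asking about: set them low and the assistant saves us from ourselves; set them high and it merely fulfills our desires.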

The Cultural Challenge

Moreover, how should algorithms handle the cultural differences that are inherent to such value decisions? Should a personal assistant of someone living in Saudi Arabia who expresses interest in anti-government protest movements discourage further interest in the topic? Should the assistant of someone living in Thailand censor the person’s communications to edit out criticism of government officials to protect the person from reprisals?

Should an assistant that determines its user is depressed try to cheer that person up by masking negative news and deluging him with the most positive news it can find to try to improve his emotional state? What happens when those decisions are complicated by the desires of advertisers that pay for a particular outcome?

As artificial intelligence develops at an exponential rate, what are the value systems and ethics with which we should imbue our new digital servants?

When algorithms start giving us orders, should they fulfill our innermost desires or should they save us from ourselves?

This is the future of AI.

Source: Forbes

Read more on AI ethics in our post: How To Teach Robots Right and Wrong
