Tech Giants Grapple with the Ethical Concerns Raised by the AI Boom

“We’re here at an inflection point for AI. We have an ethical imperative to harness AI to protect and preserve over time,” said Eric Horvitz, managing director of Microsoft Research, during a panel discussion at EmTech 2017.

One concern the panelists shared was that recent advances are leading companies to put software in positions of direct control over humans, for example in health care.

Francesca Rossi, a researcher at IBM, gave the example of a machine providing assistance or companionship to elderly people. “This robot will have to follow cultural norms that are culture-specific and task-specific,” she said. “[And] if you were to deploy in the U.S. or Japan, that behavior would have to be very different.”

In the past year, many efforts to research the ethical challenges of machine learning and AI have sprung up in academia and industry. The University of California, Berkeley; Harvard; and the Universities of Oxford and Cambridge have all started programs or institutes to work on ethics and safety in AI. In 2016, Amazon, Microsoft, Google, IBM, and Facebook jointly founded a nonprofit called the Partnership on AI to work on the problem (Apple joined in January 2017).

Companies are also taking individual action to build safeguards around their technology.

  • Gupta highlighted research at Google that is testing ways to correct biased machine-learning models, or prevent them from becoming skewed in the first place.
  • Horvitz described Microsoft’s internal ethics board for AI, dubbed AETHER, which reviews matters such as new decision algorithms developed for the company’s cloud services. Although the board is currently made up of Microsoft employees, the company hopes to add outside voices in the future.
  • Google has started its own AI ethics board.

Technology Review