Two new research groups want to ensure that AI benefits humans rather than harms them.
Whether you believe the buzz about artificial intelligence is merely hype or that the technology represents the future, something undeniable is happening. Researchers are solving decades-old problems like teaching computers to recognize images and understand speech at a rapid pace, and companies like Google and Facebook are pouring millions of dollars into their own related projects.
What could possibly go wrong?
For one thing, advances in artificial intelligence could eventually lead to unforeseen consequences. University of California at Berkeley professor Stuart Russell is concerned that powerful systems driven by artificial intelligence, or AI, could unintentionally create problems that humans cannot predict.
Consider an AI system that’s designed to make the best stock trades but has no moral code to keep it from doing something illegal. That’s why Russell and UC Berkeley debuted a new AI research center this week to address these potential problems and build AI systems that consider moral issues. Tech giants Alphabet, Facebook, IBM, and Microsoft are also teaming up to focus on the ethics challenges.
Similarly, Ilya Sutskever, the research director of the Elon Musk-backed OpenAI nonprofit, is working on AI projects independently of giant corporations. He and OpenAI believe those big companies could ignore AI's potential benefit for humanity and instead focus the technology entirely on making money.
Russell compares the current state of AI to the rise of nuclear energy during the 1950s and 1960s, when proponents believed that "anyone who disagreed with them was irrational or crazy" for wanting robust safety measures that could hinder innovation and adoption. Sutskever says some AI proponents fail to consider the technology's potential dangers or unintended consequences, just as some people were unable to grasp that widespread use of cars could lead to global warming.