John Giannandrea, who leads AI at Google, is worried about intelligent systems learning human prejudices.
He is concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute.
"The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased," Giannandrea said.
The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it.
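The mechanism is straightforward to demonstrate. Below is a minimal, hypothetical sketch (the groups, data, and threshold are invented for illustration, not drawn from any real system): a toy model that simply learns approval rates from skewed historical decisions will faithfully reproduce the disparity baked into those decisions.

```python
from collections import defaultdict

# Hypothetical biased history: group "A" was approved far more often
# than group "B", even for candidates with identical skill.
history = [
    # (group, skilled, hired)
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, True), ("B", False, False),
]

# "Training": estimate P(hired | group) from the biased history.
counts = defaultdict(lambda: [0, 0])  # group -> [hired_count, total]
for group, skilled, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predict(group):
    """Predict a hiring decision by thresholding the learned rate."""
    hired, total = counts[group]
    return hired / total >= 0.5

# The model inherits the historical disparity: identical candidates
# from the two groups receive different predictions.
print(predict("A"), predict("B"))  # True False
```

Nothing in the code singles out either group; the disparity enters entirely through the training data, which is why bias of this kind is easy to deploy and hard to notice.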
Karrie Karahalios, a professor of computer science at the University of Illinois, presented research highlighting how tricky it can be to spot bias in even the most commonplace algorithms. Karahalios showed that users don’t generally understand how Facebook filters the posts shown in their news feed. While this might seem innocuous, it is a neat illustration of how difficult it is to interrogate an algorithm.
Facebook’s news feed algorithm can certainly shape the public perception of social interactions and even major news events. Other algorithms may already be subtly distorting the kinds of medical care a person receives, or how they get treated in the criminal justice system.
This is surely a lot more important than killer robots, at least for now.
Source: MIT Technology Review