Machine prejudice from deep learning is infecting AI

A study by Aylin Caliskan-Islam, Joanna J. Bryson, and Arvind Narayanan

A note from the bloggers: This research paper is dense, but it is important to the discussion of socializing AI because it identifies what the authors call “machine prejudice” as an inevitable outcome of deep learning on human language. We have pulled a few excerpts below. To download the entire research paper, click the link at the end of this post.

Abstract

Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.
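To make the abstract’s claim concrete: the paper measures these biases with a Word-Embedding Association Test (WEAT), which compares how close two sets of target words sit to two sets of attribute words in embedding space. Here is a minimal sketch of that kind of test in Python, assuming a hypothetical `vec` dictionary mapping words to pretrained embedding vectors (the variable names and data handling are our own illustration, not the authors’ code):

```python
# Minimal WEAT-style association sketch. Assumes `vec` is a dict
# mapping words to NumPy arrays of pretrained embeddings (e.g. GloVe).
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vec):
    # s(w, A, B): how much closer word w sits to attribute set A than to B.
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vec):
    # Effect size comparing target sets X and Y on attributes A vs. B,
    # normalized by the spread of association scores across all targets.
    scores = [association(w, A, B, vec) for w in X + Y]
    sx, sy = scores[:len(X)], scores[len(X):]
    return (np.mean(sx) - np.mean(sy)) / np.std(scores, ddof=1)
```

For example, with X = flower names, Y = insect names, A = pleasant words, and B = unpleasant words, the paper finds a large positive effect size, mirroring the corresponding human Implicit Association Test.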

… Human learning is also a form of computation. Therefore, our finding that data derived from human culture will deliver biases and prejudices has implications for the human sciences as well.

We argue that prejudice must be addressed as a component of any intelligent system learning from our culture. It cannot be entirely eliminated from the system, but rather must be compensated for.
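The excerpts do not prescribe how such compensation would work. One illustrative technique from the related literature is hard debiasing in the style of Bolukbasi et al.: estimate a bias direction from word pairs and project it out of each embedding. The sketch below is simplified for brevity (Bolukbasi et al. use PCA over the pair differences; we simply average them), and again assumes a hypothetical `vec` dictionary of word vectors:

```python
# Simplified bias-compensation sketch: project a bias direction
# out of word embeddings. `vec` maps words to NumPy arrays.
import numpy as np

def bias_direction(pairs, vec):
    # Estimate a bias direction as the mean difference between paired
    # words, e.g. [("he", "she"), ("man", "woman")].
    diffs = [vec[a] - vec[b] for a, b in pairs]
    d = np.mean(diffs, axis=0)
    return d / np.linalg.norm(d)

def neutralize(word, direction, vec):
    # Remove the component of a word vector along the bias direction.
    v = vec[word]
    return v - np.dot(v, direction) * direction
```

Note that this is compensation rather than elimination, which matches the authors’ framing: the bias remains in the underlying cultural data, and the system corrects for it after the fact.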

Challenges in addressing bias
Remedies such as transparent development of AI technology and improved diversity and ethics training for developers, while useful, do little to address the kind of prejudicial bias we expose here. Unfortunately, our work points to several additional reasons why addressing bias in machine learning will be harder than one might expect. …

Awareness is better than blindness
… However, where AI is partially constructed automatically by machine learning of human culture, we may also need an analog of human explicit memory and deliberate action, one that can be trained or programmed to avoid the expression of prejudice.

Of course, such an approach doesn’t lend itself to a straightforward algorithmic formulation. Instead, it requires a long-term, interdisciplinary research program that includes cognitive scientists and ethicists. …

Study title: Semantics derived automatically from language corpora necessarily contain human biases

Source: Princeton University and University of Bath (click to download)
