Artificial Intelligence’s White Guy Problem


Credit Bianca Bagnarelli

Warnings by luminaries like Elon Musk and Nick Bostrom about “the singularity” — when machines become smarter than humans — have attracted millions of dollars and spawned a multitude of conferences.

But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems.

Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.
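
To make the disparity concrete, here is a minimal sketch, in Python and with made-up numbers (this is not ProPublica’s methodology or data), of the kind of audit behind such a finding: among people who did not go on to re-offend, compare how often each group was nonetheless flagged as high risk.

```python
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (group, flagged_high_risk, reoffended) tuples."""
    flagged_but_did_not = defaultdict(int)   # mistakenly flagged as high risk
    did_not_reoffend = defaultdict(int)      # all non-reoffenders in the group
    for group, flagged, reoffended in records:
        if not reoffended:
            did_not_reoffend[group] += 1
            if flagged:
                flagged_but_did_not[group] += 1
    return {g: flagged_but_did_not[g] / did_not_reoffend[g]
            for g in did_not_reoffend if did_not_reoffend[g] > 0}

# Hypothetical data: the disparity shows up as one group's error rate
# being roughly double the other's.
sample = ([("black", True, False)] * 45 + [("black", False, False)] * 55
          + [("white", True, False)] * 23 + [("white", False, False)] * 77)
print(false_positive_rates(sample))  # {'black': 0.45, 'white': 0.23}
```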

Histories of discrimination can live on in digital platforms, and if they go unquestioned, they become part of the logic of everyday algorithmic systems.

Another scandal emerged recently when it was revealed that Amazon’s same-day delivery service was unavailable for ZIP codes in predominantly black neighborhoods. The areas overlooked were remarkably similar to those affected by mortgage redlining in the mid-20th century. Amazon promised to redress the gaps, but it reminds us how systemic inequality can haunt machine intelligence.

And then there’s gender discrimination. Last July, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for highly paid jobs. The complexity of how search engines show ads to internet users makes it hard to say why this happened — whether the advertisers preferred showing the ads to men, or the outcome was an unintended consequence of the algorithms involved.

Regardless, algorithmic flaws aren’t easily discoverable: How would a woman know to apply for a job she never saw advertised? How might a black community learn that it was being overpoliced by software?

Like all technologies before it, artificial intelligence will reflect the values of its creators.

Source: New York Times – Kate Crawford is a principal researcher at Microsoft and co-chairwoman of a White House symposium on society and A.I.



AI is one of the top 5 tools humanity has ever had

A few highlights from the AI panel at the White House Frontiers Conference

On the impact of AI

Andrew McAfee (MIT):


To view the video, click on the picture, scroll down the page to Live Stream, and click to start the video. It may take a minute to load; then skip ahead to the time you want to watch.

(Begins @ 2:40:34)

We are at an inflection point … I think the development of these kinds of [AI] tools are going to rank among probably the top 5 tools humanity has ever had to take better care of each other and to tread more lightly on the planet … top 5 in our history. Like the book, maybe, the steam engine, maybe, written language — I might put the Internet there. We’ve all got our pet lists of the biggest inventions ever. AI needs to be on the very, very, short list.

On bias in AI

Fei-Fei Li, Professor of Computer Science, Stanford University:

(Begins @ 3:14:57)

Research repeatedly has shown that when people work in diverse groups there is increased creativity and innovation.

And interestingly, it is harder to work as a diverse group. I’m sure everybody here in the audience has had that experience. We have to listen to each other more. We have to understand each other’s perspectives more. But that also correlates well with innovation and creativity. … If we don’t have the inclusion of [diverse] people to think about the problems and the algorithms in AI, we might not only be missing the innovation boat, we might actually create bias and unfairness that are going to be detrimental to our society … 

What I have been advocating at Stanford, and with my colleagues in the community is, let’s bring the humanistic mission statement into the field of AI. Because AI is fundamentally an applied technology that’s going to serve our society. Humanistic AI not only raises the awareness and the importance of our technology, it’s actually a really, really important way to attract diverse students and technologists and innovators to participate in the technology of AI.

There has been a lot of research done to show that people with diverse backgrounds put more emphasis on the humanistic mission in their work and in their life. So, if in our education and in our research we can accentuate or bring out this humanistic message of this technology, we are more likely to invite a diversity of students and young technologists to join us.

On lack of minorities in AI

Andrew Moore, Dean, School of Computer Science, Carnegie Mellon University:

(Begins @ 3:19:10)

I so strongly applaud what you [Fei-Fei Li] are describing here because I think we are engaged in a fight here for how the 21st century pans out in terms of who’s running the world … 

The nightmare, the silly, silly thing we could do … would be if … the middle of the century is built by a bunch of non-minority guys from suburban moderately wealthy United States instead of the full population of the United States.

Source: Frontiers Conference
Click on the video labeled “Live Stream (event will start shortly)”; it may take a minute to load.

(Update 02/24/17: The original timelines listed above may be different when revisiting this video.)


How Deep Learning is making AI prejudiced

Blogger’s note: The authors of this research paper show what they refer to as “machine prejudice” and how fundamentally it derives from human culture.

“Concerns about machine prejudice are now coming to the fore–concerns that our historic biases and prejudices are being reified in machines,” they write. “Documented cases of automated prejudice range from online advertising (Sweeney, 2013) to criminal sentencing (Angwin et al., 2016).”

Following are a few excerpts: 

Abstract

“Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.

Discussion

“We show for the first time that if AI is to exploit via our language the vast knowledge that culture has compiled, it will inevitably inherit human-like prejudices. In other words, if AI learns enough about the properties of language to be able to understand and produce it, it also acquires cultural associations that can be offensive, objectionable, or harmful. These are much broader concerns than intentional discrimination, and possibly harder to address.

Awareness is better than blindness

“… where AI is partially constructed automatically by machine learning of human culture, we may also need an analog of human explicit memory and deliberate actions, that can be trained or programmed to avoid the expression of prejudice.

“Of course, such an approach doesn’t lend itself to a straightforward algorithmic formulation. Instead it requires a long-term, interdisciplinary research program that includes cognitive scientists and ethicists. …”
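
To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of word-embedding association test the paper describes: measure how much more strongly a target word associates with one attribute set than another. The vectors below are random placeholders for illustration only; the actual experiments use embeddings pre-trained on ordinary web text (e.g., GloVe).

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attrs_a, attrs_b, emb):
    """Mean cosine similarity of `word` to attribute set A minus its
    mean similarity to attribute set B (a WEAT-style statistic)."""
    return (np.mean([cosine(emb[word], emb[a]) for a in attrs_a]) -
            np.mean([cosine(emb[word], emb[b]) for b in attrs_b]))

# Placeholder vectors; real tests load pre-trained embeddings instead.
rng = np.random.default_rng(42)
emb = {w: rng.normal(size=50) for w in
       ["programmer", "nurse", "he", "him", "man", "she", "her", "woman"]}

male, female = ["he", "him", "man"], ["she", "her", "woman"]
print("programmer:", association("programmer", male, female, emb))
print("nurse:     ", association("nurse", male, female, emb))
```

With real embeddings trained on everyday language, such scores reproduce documented human biases (for example, occupation terms skewing toward one gender), which is the paper’s central finding.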

Click here to download the PDF of the report:
Semantics derived automatically from language corpora necessarily contain human biases
Aylin Caliskan-Islam, Joanna J. Bryson, and Arvind Narayanan
Princeton University; University of Bath
Draft date: August 31, 2016.


Grandma? Now you can see the bias in the data …

“Just type the word grandma in your favorite search engine image search and you will see the bias in the data, in the picture that is returned  … you will see the race bias.” — Fei-Fei Li, Professor of Computer Science, Stanford University, speaking at the White House Frontiers Conference

Google image search for Grandma 


Bing image search for Grandma



Civil Rights and Big Data

Blogger’s note: We’ve posted several articles on the bias and prejudice inherent in big data, which, combined with machine learning, results in “machine prejudice,” all of which affects humans when they interact with intelligent agents.

Apparently, as far back as May 2014, the Executive Office of the President began issuing reports on the potential of algorithmic systems for “encoding discrimination in automated decisions.” The most recent report, from May 2016, addressed two additional challenges:

1) Challenges relating to data used as inputs to an algorithm;

2) Challenges related to the inner workings of the algorithm itself.

Here are two excerpts:

The Obama Administration’s Big Data Working Group released reports on May 1, 2014 and February 5, 2015. These reports surveyed the use of data in the public and private sectors and analyzed opportunities for technological innovation as well as privacy challenges. One important social justice concern the 2014 report highlighted was “the potential of encoding discrimination in automated decisions”—that is, that discrimination may “be the inadvertent outcome of the way big data technologies are structured and used.”

To avoid exacerbating biases by encoding them into technological systems, we need to develop a principle of “equal opportunity by design”—designing data systems that promote fairness and safeguard against discrimination from the first step of the engineering process and continuing throughout their lifespan.
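
As a minimal sketch of what such a safeguard might look like in code, here is one reading of “equal opportunity” as the fairness literature usually defines it: roughly equal true-positive rates across groups. The data format and threshold below are hypothetical, not anything the report prescribes.

```python
from collections import defaultdict

def true_positive_rates(records):
    """records: iterable of (group, predicted_positive, actually_positive)."""
    tp, positives = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        if actual:
            positives[group] += 1
            if predicted:
                tp[group] += 1
    return {g: tp[g] / positives[g] for g in positives if positives[g]}

def equal_opportunity_gap(records):
    """Largest difference in true-positive rate between any two groups."""
    rates = true_positive_rates(records)
    return max(rates.values()) - min(rates.values())

# A deployment gate could then be as simple as (hypothetical threshold):
# assert equal_opportunity_gap(validation_records) < 0.05
```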

Download the report here: Whitehouse.gov

References:

https://www.whitehouse.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence

http://www.frontiersconference.org/

 

 


When artificial intelligence judges a beauty contest, white people win

Some of the beauty contest winners judged by an AI

As humans cede more and more control to algorithms, whether in the courtroom or on social media, the way those algorithms are built becomes increasingly important. The foundation of machine learning is data gathered by humans, and without careful consideration, the machines learn the same biases as their creators.

An online beauty contest called Beauty.ai, run by Youth Laboratories, solicited 600,000 entries by promising that they would be judged by artificial intelligence. The algorithm would look at wrinkles, face symmetry, amount of pimples and blemishes, race, and perceived age. However, race seemed to play a larger role than intended; of the 44 winners, 36 were white.

“So inclusivity matters—from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.” – Kate Crawford

“It happens to be that color does matter in machine vision,” Alex Zhavoronkov, chief science officer of Beauty.ai, told Motherboard. “And for some population groups the data sets are lacking an adequate number of samples to be able to train the deep neural networks.”

“If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing non-white faces,” writes Kate Crawford, principal researcher at Microsoft Research New York City, in a New York Times op-ed.
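
A minimal sketch of the kind of audit this implies: measure how each group is represented in the training set and how accuracy breaks down per group. The field names and data below are hypothetical, not Beauty.ai’s pipeline.

```python
from collections import Counter, defaultdict

def group_representation(training_examples):
    """training_examples: iterable of dicts with a 'group' field."""
    counts = Counter(ex["group"] for ex in training_examples)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items()}

def per_group_accuracy(predictions):
    """predictions: iterable of (group, predicted_label, true_label)."""
    correct, seen = defaultdict(int), defaultdict(int)
    for group, pred, true in predictions:
        seen[group] += 1
        correct[group] += int(pred == true)
    return {g: correct[g] / seen[g] for g in seen}

# Hypothetical usage: a training set that is overwhelmingly white.
examples = [{"group": "white"}] * 90 + [{"group": "non-white"}] * 10
print(group_representation(examples))  # {'white': 0.9, 'non-white': 0.1}
```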

Source: Quartz


China has now eclipsed U.S. in AI research

As more industries and policymakers awaken to the benefits of machine learning, two countries appear to be pulling away in the research race. The results will probably have significant implications for the future of AI.

Chart: articles on deep learning published annually, by country

What’s striking about it is that although the United States was an early leader on deep-learning research, China has effectively eclipsed it in terms of the number of papers published annually on the subject. The rate of increase is remarkably steep, reflecting how quickly China’s research priorities have shifted.

The quality of China’s research is also striking. The chart below narrows the research to include only those papers that were cited at least once by other researchers, an indication that the papers were influential in the field.

The United States and China are devoting far more research attention to deep learning than other countries. But, according to the White House, the United States is not investing nearly enough in basic research.

“Current levels of R&D spending are half to one-quarter of the level of R&D investment that would produce the optimal level of economic growth,” finds a companion report published this week by the Obama administration.

Source: The Washington Post


The big reveal: AI’s deep learning is biased

A comment from the writers of this blog: 

The chart below visualizes 175 cognitive biases that humans have, meticulously organized by Buster Benson and algorithmically designed by John Manoogian III.

Many of these are implicit biases: attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner. These biases, embedded in our language, are now getting embedded in big data. They are being absorbed by deep learning and are now influencing artificial intelligence. Going forward, this will affect how AI interacts with humans.

We have featured many other posts on this blog recently about this issue—how AI is demonstrating bias—and we are adding this “cheat sheet” to further illustrate the kinds of human bias that AI is learning. 

Illustration content by Buster Benson; “diagrammatic poster remix” by John Manoogian III

Source: Buster Benson blog


Artificial intelligence is quickly becoming as biased as we are


When you perform a Google search for everyday queries, you don’t typically expect systemic racism to rear its ugly head. Yet, if you’re a woman searching for a hairstyle, that’s exactly what you might find.

A simple Google image search for ‘women’s professional hairstyles’ returns the following:

 … you could probably pat Google on the back and say ‘job well done.’ That is, until you try searching for ‘unprofessional women’s hairstyles’ and find this:


It’s not new. In fact, Boing Boing spotted this back in April.

What’s concerning, though, is just how much of our lives we’re on the verge of handing over to artificial intelligence. With today’s deep learning algorithms, the ‘training’ of this AI is often as much a product of our collective hive mind as it is of programming.

Artificial intelligence, in fact, is using our collective thoughts to train the next generation of automation technologies. All the while, it’s picking up our biases and making them more visible than ever.

This is just the beginning … If you want the scary stuff, consider that we’re expanding algorithmic policing that relies on many of the same principles used to train the previous examples. In the future, our neighborhoods will see an increase or decrease in police presence based on data that we already know is biased.
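
A minimal sketch of the feedback loop this warns about, with purely illustrative numbers: if patrols are allocated wherever the most incidents were recorded, and incidents are only recorded where patrols already are, an initial skew in the data sustains itself regardless of the true underlying rates.

```python
# Two neighborhoods with identical true incident rates but a historically
# skewed patrol allocation; numbers are purely illustrative.
true_rate = {"A": 0.10, "B": 0.10}
patrols   = {"A": 60,   "B": 40}
recorded  = {"A": 0.0,  "B": 0.0}

for _ in range(10):
    # Incidents are only recorded where officers are present.
    for n in recorded:
        recorded[n] = patrols[n] * true_rate[n]
    # Next round's patrols follow last round's recorded incidents.
    total = sum(recorded.values())
    patrols = {n: 100 * recorded[n] / total for n in recorded}

print(patrols)  # the skew persists (and, with noise, typically worsens)
```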

Source: The Next Web
