Grandma? Now you can see the bias in the data …

“Just type the word grandma in your favorite search engine image search and you will see the bias in the data, in the picture that is returned  … you will see the race bias.” — Fei-Fei Li, Professor of Computer Science, Stanford University, speaking at the White House Frontiers Conference

Google image search for Grandma 


Bing image search for Grandma



When artificial intelligence judges a beauty contest, white people win

Some of the beauty contest winners judged by an AI

As humans cede more and more control to algorithms, whether in the courtroom or on social media, the way those algorithms are built becomes increasingly important. The foundation of machine learning is data gathered by humans, and without careful consideration, the machines learn the same biases as their creators.

An online beauty contest called Beauty.ai, run by Youth Laboratories, solicited 600,000 entries by promising they would be judged by artificial intelligence. The algorithm would assess wrinkles, facial symmetry, the number of pimples and blemishes, race, and perceived age. However, race seemed to play a larger role than intended; of the 44 winners, 36 were white.

“So inclusivity matters—from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.” – Kate Crawford

It turns out that color does matter in machine vision, Alex Zhavoronkov, chief science officer of Beauty.ai, told Motherboard. “And for some population groups the data sets are lacking an adequate number of samples to be able to train the deep neural networks.”

“If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing non-white faces,” writes Kate Crawford, principal researcher at Microsoft Research New York City, in a New York Times op-ed.
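The mechanism Crawford describes can be illustrated with a toy sketch. This is not any real face-recognition pipeline; it is a hypothetical 1-D model in which a "recognizer" learns a single template from training data dominated by one group, and the underrepresented group ends up with a worse fit. All numbers and group means are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical toy data: one "face feature" per person, two groups.
# Group A dominates the training set; group B is underrepresented.
def sample(mean, n):
    return [random.gauss(mean, 1.0) for _ in range(n)]

train_a = sample(0.0, 950)   # 95% of training data
train_b = sample(3.0, 50)    # 5% of training data

# A naive "recognizer" learns one template: the mean of all training data.
# Because group A dominates, the template sits close to group A's mean.
training = train_a + train_b
template = sum(training) / len(training)

def recognition_error(samples):
    # Error = average distance from the learned template.
    return sum(abs(x - template) for x in samples) / len(samples)

test_a = sample(0.0, 200)
test_b = sample(3.0, 200)

err_a = recognition_error(test_a)
err_b = recognition_error(test_b)
print(f"error on majority group A: {err_a:.2f}")
print(f"error on minority group B: {err_b:.2f}")
```

The underrepresented group's error is several times larger, not because the model is malicious, but because the template was fit almost entirely to the majority's data — the same imbalance effect, in miniature, that Zhavoronkov describes.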

Source: Quartz


Artificial intelligence is quickly becoming as biased as we are


When you perform a Google search for everyday queries, you don’t typically expect systemic racism to rear its ugly head. Yet, if you’re a woman searching for a hairstyle, that’s exactly what you might find.

A simple Google image search for ‘women’s professional hairstyles’ returns the following:

[Image: results for ‘women’s professional hairstyles’]

At first glance, you could probably pat Google on the back and say ‘job well done.’ That is, until you try searching for ‘unprofessional women’s hairstyles’ and find this:

[Image: results for ‘unprofessional women’s hairstyles’]

It’s not new. In fact, Boing Boing spotted this back in April.

What’s concerning, though, is just how much of our lives we’re on the verge of handing over to artificial intelligence. With today’s deep learning algorithms, the ‘training’ of this AI is often as much a product of our collective hive mind as it is of programming.

Artificial intelligence, in fact, is using our collective thoughts to train the next generation of automation technologies. All the while, it’s picking up our biases and making them more visible than ever.

This is just the beginning. If you want the scary stuff, consider algorithmic policing, which is expanding and relies on many of the same principles used to train the systems in the previous examples. In the future, our neighborhoods will see an increase or decrease in police presence based on data that we already know is biased.

Source: The Next Web
