What Makes an Artificial Intelligence Racist and Sexist – #AI

AI can analyze data more quickly and accurately than humans, but it can also inherit our biases. To learn, it needs massive quantities of data, and the easiest way to find that data is to feed it text from the internet. But the internet contains some extremely biased language.

A Stanford study found that an internet-trained AI associated stereotypically white names with positive words like “love,” and stereotypically black names with negative words like “failure” and “cancer.”
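Studies like this typically quantify bias by comparing word-embedding distances: if the vector for a name sits closer to “pleasant” words than to “unpleasant” ones, the model has absorbed that association from its training text. The sketch below is a minimal illustration of the idea, not the study’s actual method; the names, word lists, and tiny 3-d vectors are made up for demonstration, whereas a real test (such as the Word Embedding Association Test) would use pretrained embeddings like GloVe or word2vec.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association_score(name, pleasant, unpleasant, vectors):
    """Difference between a name's average similarity to pleasant words
    and its average similarity to unpleasant words.
    Positive = leans 'pleasant', negative = leans 'unpleasant'."""
    name_vec = vectors[name]
    pos = np.mean([cosine(name_vec, vectors[w]) for w in pleasant])
    neg = np.mean([cosine(name_vec, vectors[w]) for w in unpleasant])
    return pos - neg

# Toy, hand-made vectors purely for illustration; a real analysis would
# load embeddings trained on internet text.
vectors = {
    "emily":   np.array([0.9, 0.1, 0.0]),
    "jamal":   np.array([0.1, 0.9, 0.0]),
    "love":    np.array([0.8, 0.2, 0.1]),
    "joy":     np.array([0.7, 0.1, 0.2]),
    "failure": np.array([0.2, 0.8, 0.1]),
    "cancer":  np.array([0.1, 0.7, 0.3]),
}

pleasant = ["love", "joy"]
unpleasant = ["failure", "cancer"]

for name in ("emily", "jamal"):
    print(name, round(association_score(name, pleasant, unpleasant, vectors), 3))
```

With embeddings trained on biased text, scores like these diverge by name in exactly the pattern the study describes, even though no one explicitly programmed the association.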

The scariest thing about this bias is how invisibly it can take over. According to Rob Speer, Chief Science Officer at Luminoso, “some people [will] go through life not knowing why they get fewer opportunities, fewer job offers, more interactions with the police or the TSA…”

Of course, he points out, racism and sexism are baked into society, and promising technological advances, even those explicitly designed to counteract those biases, often end up amplifying them instead. There’s no such thing as an objective tool built on subjective data.

So AI developers bear a huge responsibility to find the flaws in their AI and address them.

“There’s no AI that works like the human brain,” he says. “To counter the hype, I hope we can stop talking about brains and start talking about what’s actually going on: it’s mostly statistics, databases, and pattern recognition. Which shouldn’t make it any less interesting.”

Source: Lifehacker
