You cannot be a celebrant of originality if you are not ALSO working to liberate the ideas of those typically underseen

Adam Galinsky of Columbia University has found in his research that “power and status act as self-reinforcing loops,” allowing those who have power and status to have their ideas heard, while those without power are ignored and silenced.

It’s not that the original idea is weighed and deemed unworthy but that the person bringing that new and unusual idea is deemed unworthy of being listened to.

When you are hiring to bring in new ideas, or designing hackathons to unlock innovation within your firm, know this: you cannot do innovation and access original ideas without addressing the deep and pervasive role of bias.

Either you are doing something to explicitly dismantle the structural ways in which we limit who is allowed to have ideas, to unlock their capacity… or you are allowing the same old people to keep doing the same old things, perpetuating the status quo.

By your actions, you’re picking a side.

Source: Nilofer Merchant




Artificial Intelligence Will Be as Biased and Prejudiced as Its Human Creators

The optimism around modern technology lies in part in the belief that it’s a democratizing force—one that isn’t bound by the petty biases and prejudices that humans have learned over time. But for artificial intelligence, that’s a false hope, according to new research, and the reason is boneheadedly simple: Just as we learn our biases from the world around us, AI will learn its biases from us.

Source: Pacific Standard
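
To make that mechanism concrete, here is a minimal sketch in Python. The four sentences and the crude PMI-style association score are invented for illustration and are not the method the research above uses (real systems learn word embeddings from enormous amounts of human-written text), but the underlying point is the same: a model’s associations are simply the statistics of the text it was trained on.

```python
# A minimal sketch, not the researchers' actual method: a toy corpus with a
# skewed gender/occupation pattern, and a PMI-style association score computed
# from it. All sentences and words here are invented for illustration.
import math
from collections import Counter
from itertools import combinations

corpus = [
    "the engineer said he would fix the build",
    "the engineer said he had reviewed the patch",
    "the nurse said she would check the chart",
    "the nurse said she had called the doctor",
]

word_counts = Counter()
pair_counts = Counter()
for sentence in corpus:
    tokens = set(sentence.split())
    word_counts.update(tokens)
    pair_counts.update(frozenset(p) for p in combinations(tokens, 2))

total = len(corpus)

def pmi(a, b):
    """Pointwise mutual information of two words co-occurring in a sentence."""
    p_ab = pair_counts[frozenset((a, b))] / total
    p_a, p_b = word_counts[a] / total, word_counts[b] / total
    return math.log(p_ab / (p_a * p_b)) if p_ab else float("-inf")

# The skewed text produces skewed associations: "engineer" pairs with "he",
# "nurse" with "she", even though nothing about either job implies a gender.
for target in ("engineer", "nurse"):
    for attribute in ("he", "she"):
        print(f"PMI({target!r}, {attribute!r}) = {pmi(target, attribute):.2f}")
```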


The big reveal: AI’s deep learning is biased

A comment from the writers of this blog: 

The chart below visualizes 175 cognitive biases that humans have, meticulously organized by Buster Benson and algorithmically designed by John Manoogian III.

Many of these biases are implicit biases: attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner. These biases, embedded in our language, are now getting embedded in big data, where they are absorbed by deep learning and carried into Artificial Intelligence. Going forward, this will shape how AI interacts with humans.
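
As a small illustration of that last point, here is a hypothetical bigram “autocomplete” trained on a few invented sentences (the same kind of skewed text as in the sketch above, not data from any real system). Its most likely continuation simply mirrors the skew in its training text, which is how bias embedded in language carries through to how an AI system talks back to us.

```python
# A minimal sketch with invented sentences: a bigram "autocomplete" model
# trained on skewed text. Its most likely continuation simply mirrors the
# statistics of what it was trained on.
from collections import Counter, defaultdict

training_text = [
    "the engineer said he would fix it",
    "the engineer said he was busy",
    "the nurse said she would help",
    "the nurse said she was on shift",
]

# Count which word follows each two-word context.
continuations = defaultdict(Counter)
for sentence in training_text:
    tokens = sentence.split()
    for i in range(len(tokens) - 2):
        context = (tokens[i], tokens[i + 1])
        continuations[context][tokens[i + 2]] += 1

def autocomplete(context):
    """Return the most frequent continuation seen for a two-word context."""
    options = continuations[context]
    return options.most_common(1)[0][0] if options else None

print(autocomplete(("engineer", "said")))  # -> 'he'
print(autocomplete(("nurse", "said")))     # -> 'she'
```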

We have featured many other posts on this blog recently about this issue—how AI is demonstrating bias—and we are adding this “cheat sheet” to further illustrate the kinds of human bias that AI is learning. 

Illustration content by Buster Benson; “diagrammatic poster remix” by John Manoogian III

Source: Buster Benson blog
