The Real Reason Tech Struggles With Algorithmic Bias

Yaël Eisenstat is a former CIA officer, national security adviser to Vice President Biden, and CSR leader at ExxonMobil. She was Facebook's head of elections integrity operations from June to November 2018.

Has anyone stopped to ask whether the humans who feed the machines really understand what bias means?

Over more than a decade working as a CIA officer, I went through months of training and routine retraining on structural methods for checking assumptions and understanding cognitive biases.

It is one of the most important skills for an intelligence officer to develop. Analysts and operatives must hone the ability to test assumptions and do the uncomfortable, often time-consuming work of rigorously evaluating their own biases when analyzing events. They must also examine the biases of those providing information to collectors: assets, foreign governments, media, and adversaries.

While tech companies often have mandatory “managing bias” training to address diversity and inclusion issues, I saw no comparable training on cognitive bias and decision making, particularly as it relates to how products and processes are built and secured.

I believe that many of my former coworkers at Facebook fundamentally want to make the world a better place. I have no doubt that they feel they are building products that have been tested and analyzed to ensure they are not perpetuating the nastiest biases.

But the company has created its own sort of insular bubble in which its employees’ perception of the world is the product of a number of biases that are ingrained within the Silicon Valley tech and innovation scene.

Becoming overly reliant on data—which in itself is a product of availability bias—is a huge part of the problem.

In my time at Facebook, I was frustrated by the immediate jump to “data” as the solution to all questions. That impulse often overshadowed the critical thinking needed to ensure that the information provided wasn’t tainted by confirmation, pattern, or other cognitive biases.

To counter algorithmic, machine, and AI bias, human intelligence must be incorporated into solutions, as opposed to an over-reliance on so-called “pure” data.
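
The article does not describe a specific mechanism for doing this, but one common way to put human judgment back in the loop is to gate automated decisions behind a simple per-group disparity check and route flagged cases to human reviewers rather than acting on the model output alone. The sketch below is a hypothetical illustration under that assumption; the Decision fields, the 0.2 threshold, the group labels, and the review queue are all invented for the example and are not Facebook's approach.

```python
# Hypothetical human-in-the-loop gate for an automated classifier.
# Not drawn from the article or any real Facebook system; thresholds,
# group labels, and the review queue are illustrative assumptions.

from dataclasses import dataclass
from collections import defaultdict


@dataclass
class Decision:
    item_id: str
    group: str          # e.g., a demographic or language segment (assumed)
    score: float        # model confidence that the item violates policy
    auto_action: bool   # what the model alone would do


def flag_disparities(decisions, threshold=0.2):
    """Compare per-group action rates against the overall rate and flag
    groups that diverge by more than `threshold` (an assumed cutoff)."""
    overall = sum(d.auto_action for d in decisions) / len(decisions)
    by_group = defaultdict(list)
    for d in decisions:
        by_group[d.group].append(d.auto_action)
    flagged = {
        g: sum(acts) / len(acts)
        for g, acts in by_group.items()
        if abs(sum(acts) / len(acts) - overall) > threshold
    }
    return overall, flagged


def route(decisions, threshold=0.2):
    """Send decisions from flagged groups to a human review queue instead
    of applying the model output automatically."""
    _, flagged = flag_disparities(decisions, threshold)
    review_queue, auto_applied = [], []
    for d in decisions:
        (review_queue if d.group in flagged else auto_applied).append(d)
    return review_queue, auto_applied


if __name__ == "__main__":
    sample = [
        Decision("a1", "group_x", 0.91, True),
        Decision("a2", "group_x", 0.88, True),
        Decision("a3", "group_x", 0.84, True),
        Decision("b1", "group_y", 0.76, True),
        Decision("b2", "group_y", 0.71, True),
        Decision("b3", "group_y", 0.40, False),
        Decision("c1", "group_z", 0.79, True),
        Decision("c2", "group_z", 0.33, False),
        Decision("c3", "group_z", 0.81, True),
    ]
    queue, applied = route(sample)
    print(f"{len(queue)} decisions routed to human review, "
          f"{len(applied)} applied automatically")
```

The point of a gate like this is not that the disparity metric is itself unbiased; it is that the metric only flags cases for people to scrutinize, rather than letting the data stand in for the judgment.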

Attempting to avoid bias without a clear understanding of what that truly means will inevitably fail.

Source: Wired
