Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI
As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday.
“We want to make these systems as ethical as possible and free from unseen biases,” she said.
In her SXSW session, titled “Dark Days: AI and the Rise of Fascism”, Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.
“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.
One of the key problems with artificial intelligence is that it is often invisibly coded with human biases.
“We should always be suspicious when machine learning systems are described as free from bias if it’s been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”
Source: The Guardian