Welcoming the guidelines at the Social Robotics and AI conference in Oxford, Alan Winfield, a professor of robotics at the University of the West of England, said they represented “the first step towards embedding ethical values into robotics and AI”.
Winfield said: “Deep learning systems are quite literally using the whole of the data on the internet to train on, and the problem is that that data is biased. These systems tend to favour white middle-aged men, which is clearly a disaster. All the human prejudices tend to be absorbed, or there’s a danger of that.”
The guidance even hints at the prospect of sexist or racist robots, warning against “lack of respect for cultural diversity or pluralism”.
“This is already showing up in police technologies,” said Sharkey, adding that systems designed to flag suspicious people for stops at airports had already proved to be a form of racial profiling.
Source: The Guardian