Are there some things we just shouldn’t build? #AI

The prestigious Neural Information Processing Systems conference has a new topic on its agenda. Alongside the usual … concern about AI’s power.

Kate Crawford … urged attendees to start considering, and finding ways to mitigate, accidental or intentional harms caused by their creations.

“Amongst the very real excitement about what we can do, there are also some really concerning problems arising,” she said.

“In domains like medicine we can’t have these models just be a black box where something goes in and you get something out but don’t know why,” says Maithra Raghu, a machine-learning researcher at Google. On Monday, she presented open-source software developed with colleagues that can reveal what a machine-learning program is paying attention to in data. It may ultimately allow a doctor to see what part of a scan or patient history led an AI assistant to make a particular diagnosis.
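The article doesn’t describe how Raghu’s software works internally, but one common, generic way to surface what a model is “paying attention to” is gradient-based saliency: take the gradient of a prediction score with respect to the input, and read large-magnitude gradients as the features that most influenced the decision. The sketch below is a minimal illustration of that general idea in PyTorch, not the tool presented at the conference; the toy model and random input are hypothetical stand-ins.

```python
# Minimal sketch of gradient-based saliency -- a generic interpretability
# technique, NOT the specific open-source software described above.
import torch
import torch.nn as nn

# Hypothetical toy classifier standing in for a diagnostic model.
model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

# Stand-in input (e.g., features from a scan or patient history).
x = torch.randn(1, 16, requires_grad=True)

score = model(x)[0].max()  # score of the top predicted class
score.backward()           # gradient of that score w.r.t. the input

# Large absolute gradients flag the input features that most influenced
# the prediction -- a crude map of "where the model is looking".
saliency = x.grad.abs().squeeze()
print(saliency)
```

In a medical setting, the same idea could, in principle, highlight the pixels of a scan or the fields of a record behind a particular diagnosis, which is the kind of transparency Raghu argues these models need.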

“If you have a diversity of perspectives and backgrounds, you might be more likely to check for bias against different groups,” says Hanna Wallach, a researcher at Microsoft.

Others in Long Beach hope to make the people building AI better reflect humanity. Like computer science as a whole, machine learning skews white, male, and Western. A parallel technical conference called Women in Machine Learning has run alongside NIPS for a decade. This Friday sees the first Black in AI workshop, intended to create a dedicated space for people of color in the field to present their work.

Towards the end of her talk on Tuesday, Crawford suggested civil disobedience could shape the uses of AI. She talked of French engineer René Carmille, who sabotaged tabulating machines used by the Nazis to track French Jews. And she told today’s AI engineers to consider the lines they don’t want their technology to cross. “Are there some things we just shouldn’t build?” she asked.

Source: Wired


