For a field that was not well known outside of academia a decade ago, artificial intelligence has grown dizzyingly fast.
Tech companies from Silicon Valley to Beijing are betting everything on it, venture capitalists are pouring billions into research and development, and start-ups are being created on what seems like a daily basis. If our era is the next Industrial Revolution, as many claim, A.I. is surely one of its driving forces.
I worry, however, that enthusiasm for A.I. is preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.
I call this approach “human-centered A.I.” It consists of three goals that can help responsibly guide the development of intelligent machines.
- First, A.I. needs to reflect more of the depth that characterizes our own intelligence.
- Second, A.I. should enhance us, not replace us.
- Third, the development of this technology must be guided, at each step, by concern for its effect on humans.
No technology is more reflective of its creators than A.I. Indeed, it has been said that there are no "machine" values at all; machine values are human values.
A human-centered approach to A.I. means these machines don't have to be our competitors, but can be partners in securing our well-being. However autonomous our technology becomes, its impact on the world — for better or worse — will always be our responsibility.
Fei-Fei Li is a professor of computer science at Stanford, where she directs the Stanford Artificial Intelligence Lab, and the chief scientist for A.I. research at Google Cloud.