How to Make A.I. That’s Good for People

For a field that was not well known outside of academia a decade ago, artificial intelligence has grown dizzyingly fast.

Tech companies from Silicon Valley to Beijing are betting everything on it, venture capitalists are pouring billions into research and development, and start-ups are being created on what seems like a daily basis. If our era is the next Industrial Revolution, as many claim, A.I. is surely one of its driving forces.

I worry, however, that enthusiasm for A.I. is preventing us from reckoning with its looming effects on society. Despite its name, there is nothing “artificial” about this technology — it is made by humans, intended to behave like humans and affects humans. So if we want it to play a positive role in tomorrow’s world, it must be guided by human concerns.

I call this approach “human-centered A.I.” It consists of three goals that can help responsibly guide the development of intelligent machines.

  • First, A.I. needs to reflect more of the depth that characterizes our own intelligence.
  • Second, A.I. should enhance us, not replace us.
  • Third, the development of this technology must be guided, at each step, by concern for its effect on humans.

No technology is more reflective of its creators than A.I. It has been said that there are no “machine” values at all, in fact; machine values are human values.

A human-centered approach to A.I. means these machines don’t have to be our competitors, but partners in securing our well-being. However autonomous our technology becomes, its impact on the world — for better or worse — will always be our responsibility.

Fei-Fei Li is a professor of computer science at Stanford, where she directs the Stanford Artificial Intelligence Lab, and the chief scientist for A.I. research at Google Cloud.

Source: NYT


AI poses one of the “greatest tests of leadership for our time”

“But it is a test that I am confident we can meet.”
Theresa May, Prime Minister of the UK

The prime minister is to say she wants the UK to lead the world in deciding how artificial intelligence can be deployed in a safe and ethical manner.

Theresa May will say at the World Economic Forum in Davos that a new advisory body, previously announced in the Autumn Budget, will co-ordinate efforts with other countries.

In addition, she will confirm that the UK will join the Davos forum’s own council on artificial intelligence.

But others may have stronger claims.

Earlier this week, Google picked France as the base for a new research centre dedicated to exploring how AI can be applied to health and the environment.

Facebook also announced it was doubling the size of its existing AI lab in Paris, while software firm SAP committed itself to a 2bn euro ($2.5bn; £1.7bn) investment into the country that will include work on machine learning.

Meanwhile, a report released last month by the Eurasia Group consultancy suggested that the US and China are engaged in a “two-way race for AI dominance”.

It predicted Beijing would take the lead thanks to the “insurmountable” advantage of offering its companies more flexibility in how they use data about its citizens.

Mrs May is expected to say that the UK is recognised as first in the world for its preparedness to “bring artificial intelligence into government”.

Source: BBC

 


What’s Bigger Than Fire and Electricity? Artificial Intelligence – Google

Google CEO Sundar Pichai believes artificial intelligence could have “more profound” implications for humanity than electricity or fire, according to recent comments.

Pichai also warned that the development of artificial intelligence could pose as much risk as that of fire if its potential is not harnessed correctly.

“AI is one of the most important things humanity is working on,” Pichai said in an interview with MSNBC and Recode.

“My point is AI is really important, but we have to be concerned about it,” Pichai said. “It’s fair to be worried about it—I wouldn’t say we’re just being optimistic about it— we want to be thoughtful about it. AI holds the potential for some of the biggest advances we’re going to see.”

Source: Newsweek

 


In 2018 AI will gain a moral compass

The ethics of artificial intelligence must be central to its development


Humanity faces a wide range of challenges that are characterised by extreme complexity

… the successful integration of AI technologies into our social and economic world creates its own challenges. They could either help overcome economic inequality or they could worsen it if the benefits are not distributed widely.

They could shine a light on damaging human biases and help society address them, or entrench patterns of discrimination and perpetuate them. Getting things right requires serious research into the social consequences of AI and the creation of partnerships to ensure it works for the public good.

This is why I predict the study of the ethics, safety and societal impact of AI is going to become one of the most pressing areas of enquiry over the coming year.

It won’t be easy: the technology sector often falls into reductionist ways of thinking, replacing complex value judgments with a focus on simple metrics that can be tracked and optimised over time. It is, of course, far simpler to count likes than to understand what it actually means to be liked and the effect this has on confidence or self-esteem.

There has already been valuable work done in this area. For example, there is an emerging consensus that it is the responsibility of those developing new technologies to help address the effects of inequality, injustice and bias. In 2018, we’re going to see many more groups start to address these issues.

Progress in this area also requires the creation of new mechanisms for decision-making and voicing that include the public directly. This would be a radical change for a sector that has often preferred to resolve problems unilaterally – or leave others to deal with them.

Mustafa Suleyman, co-founder of DeepMind Technologies

We need to do the hard, practical and messy work of finding out what ethical AI really means. If we manage to get AI to work for people and the planet, then the effects could be transformational. Right now, there’s everything to play for.

Source: Wired 

 


DeepMind’s new AI ethics unit

DeepMind made this announcement in October 2017.

Google-owned DeepMind has announced the formation of a major new AI research unit comprised of full-time staff and external advisors


As we hand over more of our lives to artificial intelligence systems, keeping a firm grip on their ethical and societal impact is crucial.

DeepMind Ethics & Society (DMES), a unit comprised of both full-time DeepMind employees and external fellows, is the company’s latest attempt to scrutinise the societal impacts of the technologies it creates.

DMES will work alongside technologists within DeepMind and fund external research in six areas: privacy, transparency and fairness; economic impacts; governance and accountability; managing AI risk; AI morality and values; and how AI can address the world’s challenges.

Its aim, according to DeepMind, is twofold: to help technologists understand the ethical implications of their work and help society decide how AI can be beneficial.

“We want these systems in production to be our highest collective selves. We want them to be most respectful of human rights, we want them to be most respectful of all the equality and civil rights laws that have been so valiantly fought for over the last sixty years.” [Mustafa Suleyman]

Source: Wired


Jaron Lanier – the greatest tragedy in the history of computing and …

A few highlights from the Business Insider interview with Jaron Lanier:

But that general principle — that we’re not treating people well enough with digital systems — still bothers me. I do still think that is very true.

Well, this is maybe the greatest tragedy in the history of computing, and it goes like this: there was a well-intentioned, sweet movement in the ‘80s to try to make everything online free. And it started with free software and then it was free music, free news, and other free services.

But, at the same time, it’s not like people were clamoring for the government to do it or some sort of socialist solution. If you say, well, we want to have entrepreneurship and capitalism, but we also want it to be free, those two things are somewhat in conflict, and there’s only one way to bridge that gap, and it’s through the advertising model.

And advertising became the model of online information, which is kind of crazy. But here’s the problem: if you start out with advertising, if you start out by saying what I’m going to do is place an ad for a car or whatever, gradually, not because of any evil plan — just because they’re trying to make their algorithms work as well as possible and maximize shareholder value, and because computers are getting faster and algorithms more effective —

what starts out as advertising morphs into behavior modification.

A second issue is that, for people who participate in a system of this kind, since everything is free and it’s all being monetized, what reward can you get? Ultimately, this system creates assholes, because if being an asshole gets you attention, that’s exactly what you’re going to do. Because there’s a bias for negative emotions to work better in engagement, and because the attention economy brings out the asshole in a lot of other people, the people who want to disrupt and destroy get a lot more efficiency for their spend than the people who might be trying to build up, preserve and improve.

Q: What do you think about programmers using consciously addicting techniques to keep people hooked to their products?

A: There’s a long and interesting history that goes back to the 19th century, with the science of behaviorism, which arose to study living things as though they were machines.

Behaviorists had this feeling that I think might be a little like the godlike feeling that overcomes some hackers these days, where they feel totally godlike, as though they have the keys to everything and can control people.


I think our responsibility as engineers is to engineer as well as possible, and to engineer as well as possible, you have to treat the thing you’re engineering as a product.

You can’t respect it in a deified way.

It goes in the reverse. We’ve been talking about the behaviorist approach to people, and manipulating people with addictive loops as we currently do with online systems.

In this case, you’re treating people as objects.

It’s the flipside of treating machines as people, as AI does. They go together. Both of them are mistakes.

Source: Read the extensive interview at Business Insider




Put Humans at the Center of AI

As the director of Stanford’s AI Lab and now chief scientist of Google Cloud, Fei-Fei Li is helping to spur the AI revolution. But it’s a revolution that needs to include more people. She spoke with MIT Technology Review senior editor Will Knight about why everyone benefits if we emphasize the human side of the technology.

Why did you join Google?

Researching cutting-edge AI is very satisfying and rewarding, but we’re seeing this great awakening, a great moment in history. For me it’s very important to think about AI’s impact in the world, and one of the most important missions is to democratize this technology. The cloud is this gigantic computing vehicle that delivers computing services to every single industry.

What have you learned so far?

We need to be much more human-centered.

If you look at where we are in AI, I would say it’s the great triumph of pattern recognition. It is very task-focused, it lacks contextual awareness, and it lacks the kind of flexible learning that humans have.

We also want to make technology that makes humans’ lives better, our world safer, our lives more productive and better. All this requires a layer of human-level communication and collaboration.

When you are making a technology this pervasive and this important for humanity, you want it to carry the values of the entire humanity, and serve the needs of the entire humanity.

If the developers of this technology do not represent all walks of life, it is very likely that this will be a biased technology. I say this as a technologist, a researcher, and a mother. And we need to be speaking about this clearly and loudly.

Source: MIT Technology Review




Facebook and Google promote Las Vegas-shooting hoaxes

The missteps underscore how misinformation continues to undermine the credibility of Silicon Valley’s biggest companies.

Accuracy matters in the moments after a tragedy. Facts can help catch the suspects, save lives and prevent a panic.

But in the aftermath of the deadly mass shooting in Las Vegas on Sunday, the world’s two biggest gateways for information, Google and Facebook, did nothing to quell criticism that they amplify fake news when they steer readers toward hoaxes and misinformation gathering momentum on fringe sites.

Under its “top stories,” Google posted conspiracy-laden links from 4chan — home to some of the internet’s most ardent trolls. It also promoted a now-deleted story from Gateway Pundit and served YouTube videos of dubious origin.

The posts all had something in common: They identified the wrong assailant.

Facebook’s Crisis Response page, a hub for users to stay informed and mobilize during disasters, perpetuated the same rumors by linking to sites such as Alt-Right News and End Time Headlines, according to Fast Company.

The platforms have immense influence on what gets seen and read. More than two-thirds of Americans report getting at least some of their news from social media, according to the Pew Research Center. A separate global study published by Edelman last year found that more people trusted search engines (63%) for news and information than traditional media such as newspapers and television (58%).

Still, skepticism abounds that the companies beholden to shareholders are equipped to protect the public from misinformation and recognize the threat their platforms pose to democratic societies.

Source: LA Times




Sundar Pichai says the future of Google is AI. But can he fix the algorithm?

I was asking in the context of the aftermath of the 2016 election and the misinformation that companies like Facebook, Twitter, and Google were found to have spread.

“I view it as a big responsibility to get it right,” he says. “I think we’ll be able to do these things better over time. But I think the answer to your question, the short answer and the only answer, is we feel huge responsibility.”

But it’s worth questioning whether Google’s systems are making the right decisions, even as they make some decisions much easier.

People are already skittish about how much Google knows about them, and they are unclear on how to manage their privacy settings. Pichai thinks that’s another one of those problems that AI could fix, “heuristically.”

“Down the line, the system can be much more sophisticated about understanding what is sensitive for users, because it understands context better,” Pichai says. “[It should be] treating health-related information very differently from looking for restaurants to eat with friends.” Instead of asking users to sift through a “giant list of checkboxes,” a user interface driven by AI could make it easier to manage.
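Pichai offers no implementation details, so here is a purely hypothetical sketch of the idea he describes: inferring from context whether a request is sensitive and adjusting the privacy policy accordingly, rather than presenting a giant list of checkboxes. Every category, keyword and policy field below is invented for illustration.

```python
# Purely hypothetical sketch: pick a privacy policy from the inferred
# topic of a request instead of asking the user to manage checkboxes.
# All categories, keywords and policy fields are invented.
SENSITIVE_TOPICS = {"health", "finance", "legal"}

KEYWORDS = {
    "health": ["symptom", "diagnosis", "medication"],
    "finance": ["loan", "salary", "debt"],
    "legal": ["lawsuit", "attorney"],
}

def infer_topic(query: str) -> str:
    """Stand-in for a real context model (here, naive keyword matching)."""
    q = query.lower()
    for topic, words in KEYWORDS.items():
        if any(w in q for w in words):
            return topic
    return "general"

def privacy_policy(query: str) -> dict:
    """Treat sensitive topics more strictly, by default and automatically."""
    topic = infer_topic(query)
    sensitive = topic in SENSITIVE_TOPICS
    return {"topic": topic,
            "store_history": not sensitive,
            "personalize": not sensitive}

print(privacy_policy("medication for flu symptoms"))
# {'topic': 'health', 'store_history': False, 'personalize': False}
print(privacy_policy("restaurants to eat with friends"))
# {'topic': 'general', 'store_history': True, 'personalize': True}
```

A real system would replace the keyword matching with a learned model of context, which is exactly the part Pichai says AI should supply; the sketch only shows where such a model would slot in.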

Of course, what’s good for users versus what’s good for Google versus what’s good for the other businesses that rely on Google’s data is a tricky question. And it’s one that AI alone can’t solve. Google is responsible for those choices, whether they’re made by people or robots.

The amount of scrutiny companies like Facebook and Google — and Google’s YouTube division — face over presenting inaccurate or outright manipulative information is growing every day, and for good reason.

Pichai thinks that Google’s basic approach to search can also be used for surfacing good, trustworthy content in the feed. “We can still use the same core principles we use in ranking around authoritativeness, trust, reputation.”

What he’s less sure about, however, is what to do beyond the realm of factual information — with genuine opinion: “I think the issue we all grapple with is how do you deal with the areas where people don’t agree or the subject areas get tougher?”

When it comes to presenting opinions on its feed, Pichai wonders if Google could “bring a better perspective, rather than just ranking alone. … Those are early areas of exploration for us, but I think we could do better there.”
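Pichai does not describe Google’s actual ranking system, but the general shape of scoring content on signals like authoritativeness, trust and reputation can be sketched in a few lines. All of the signals, weights and items below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    relevance: float          # how well the item matches the query, 0..1
    authoritativeness: float  # hypothetical source-quality signal, 0..1
    reputation: float         # hypothetical track-record signal, 0..1

# Hypothetical hand-set weights; a real system would learn them from
# data and combine far more signals.
WEIGHTS = {"relevance": 0.5, "authoritativeness": 0.3, "reputation": 0.2}

def score(item: Item) -> float:
    return (WEIGHTS["relevance"] * item.relevance
            + WEIGHTS["authoritativeness"] * item.authoritativeness
            + WEIGHTS["reputation"] * item.reputation)

items = [
    Item("Viral but unsourced post", relevance=0.9,
         authoritativeness=0.1, reputation=0.2),
    Item("Well-sourced report", relevance=0.7,
         authoritativeness=0.9, reputation=0.8),
]
for item in sorted(items, key=score, reverse=True):
    print(f"{score(item):.2f}  {item.title}")
# 0.78  Well-sourced report
# 0.52  Viral but unsourced post
```

The point of the sketch is only that trust signals can outweigh raw relevance when they are explicit inputs to ranking; it says nothing about how Google actually weighs them.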

Source: The Verge




Forget Killer Robots—Bias Is the Real AI Danger


John Giannandrea, who leads AI at Google, is worried about intelligent systems learning human prejudices.

… concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute.

“The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased,” Giannandrea said.

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it.
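To make the mechanism concrete, here is a minimal sketch of the failure mode Giannandrea describes. Everything below is invented for illustration: a plain logistic-regression model is trained on historical decisions that were skewed against one group, and it faithfully reproduces that skew.

```python
import numpy as np

# Synthetic "historical decisions": a genuinely relevant score plus a
# group label, with past approvals skewed in favour of group 0.
# All data here is invented purely to illustrate the point.
rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                # 0 or 1
score = rng.normal(0, 1, n)                  # a legitimately relevant feature
approved = (score + (group == 0) * 1.0 + rng.normal(0, 0.5, n)) > 0.5

X = np.column_stack([score, group, np.ones(n)])  # features plus bias term
w = np.zeros(3)

# Ordinary logistic regression, trained by gradient descent. Nothing in
# the training procedure is malicious or even unusual.
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - approved) / n

# Two applicants with identical scores but different group membership:
pair = np.array([[0.0, 0, 1.0], [0.0, 1, 1.0]])
probs = 1 / (1 + np.exp(-pair @ w))
print(f"P(approve | group 0) = {probs[0]:.2f}")
print(f"P(approve | group 1) = {probs[1]:.2f}")
# The model reproduces the historical skew: biased data in, biased
# decisions out.
```

The bias enters entirely through the data, which is why it is so easy to miss: the code that learns it is unremarkable.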

Karrie Karahalios, a professor of computer science at the University of Illinois, presented research highlighting how tricky it can be to spot bias in even the most commonplace algorithms. Karahalios showed that users don’t generally understand how Facebook filters the posts shown in their news feed. While this might seem innocuous, it is a neat illustration of how difficult it is to interrogate an algorithm.

Facebook’s news feed algorithm can certainly shape the public perception of social interactions and even major news events. Other algorithms may already be subtly distorting the kinds of medical care a person receives, or how they get treated in the criminal justice system.

This is surely a lot more important than killer robots, at least for now.

Source: MIT Technology Review




Artificial intelligence pioneer says throw it all away and start again

Geoffrey Hinton harbors doubts about AI’s current workhorse.

In 1986, Geoffrey Hinton co-authored a paper that, three decades later, is central to the explosion of artificial intelligence.

But Hinton says his breakthrough method should be dispensed with, and a new path to AI found.

… he is now “deeply suspicious” of back-propagation, the workhorse method that underlies most of the advances we are seeing in the AI field today, including the capacity to sort through photos and talk to Siri.

“My view is throw it all away and start again.”

Hinton said that, to push materially ahead, entirely new methods will probably have to be invented. “Max Planck said, ‘Science progresses one funeral at a time.’ The future depends on some graduate student who is deeply suspicious of everything I have said.”

Hinton suggested that to get neural networks to become intelligent on their own (what is known as “unsupervised learning”), the field will have to change course: “I suspect that means getting rid of back-propagation.”

“I don’t think it’s how the brain works,” he said. “We clearly don’t need all the labeled data.”
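For readers unfamiliar with the method Hinton wants to retire, here is a minimal sketch of back-propagation itself on a toy problem: the chain rule applied layer by layer to work out how the loss changes with each weight. The network size, task and learning rate are arbitrary choices for illustration.

```python
import numpy as np

# Toy problem: learn XOR with a tiny two-layer network trained by
# back-propagation. Sizes and hyperparameters are arbitrary.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(1)
W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the error derivative back through each
    # layer via the chain rule. This is the step Hinton is suspicious of.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient step on every parameter.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

Note the reliance on the labelled targets `y`: the error signal that drives every update comes from supervision, which is what Hinton means when he says “we clearly don’t need all the labeled data.”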

Source: Axios


Behind the Google diversity memo furor is fear of Google’s vast opaque power

Fear of the opaque power that Google in particular, and Silicon Valley in general, wields over our lives.

If Google — and the tech world more generally — is sexist, or in the grips of a totalitarian cult of political correctness, or a secret hotbed of alt-right reactionaries, the consequences would be profound.

Google wields a monopoly over search, one of the central technologies of our age, and, alongside Facebook, dominates the internet advertising market, making it a powerful driver of both consumer opinion and the media landscape. 

It shapes the world in which we live in ways both obvious and opaque.

This is why trust matters so much in tech. It’s why Google, to attain its current status in society, had to promise, again and again, that it wouldn’t be evil. 

Compounding the problem is that the tech industry’s point of view is embedded deep in the product, not announced on the packaging. Its biases are quietly built into algorithms, reflected in platform rules, expressed in code few of us can understand and fewer of us will ever read.

But what if it actually is evil? Or what if it’s not evil but just immature, unreflective, and uncompassionate? And what if that’s the culture that designs the digital services the rest of us have to use?

The technology industry’s power is vast, and the way that power is expressed is opaque, so the only real assurance you can have that your interests and needs are being considered is to be in the room when the decisions are made and the code is written. But tech as an industry is unrepresentative of the people it serves and unaccountable in the way it serves them, and so there’s very little confidence among any group that the people in the room are the right ones.

Source: Vox (read the entire article by Ezra Klein)


