Sundar Pichai says the future of Google is AI. But can he fix the algorithm?

I was asking in the context of the aftermath of the 2016 election and the misinformation that companies like Facebook, Twitter, and Google were found to have spread.

“I view it as a big responsibility to get it right,” he says. “I think we’ll be able to do these things better over time. But I think the answer to your question, the short answer and the only answer, is we feel huge responsibility.”

But it’s worth questioning whether Google’s systems are making the right decisions, even as they make some decisions much easier.

People are already skittish about how much Google knows about them, and they are unclear on how to manage their privacy settings. Pichai thinks that’s another one of those problems that AI could fix, “heuristically.”

“Down the line, the system can be much more sophisticated about understanding what is sensitive for users, because it understands context better,” Pichai says. “[It should be] treating health-related information very differently from looking for restaurants to eat with friends.” Instead of asking users to sift through a “giant list of checkboxes,” a user interface driven by AI could make it easier to manage.
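Pichai’s example suggests a simple mechanism: infer a query’s sensitivity from its content and apply different handling. Here is a minimal sketch of that idea; the keywords and policy names are invented for illustration and do not reflect any actual Google system:

```python
# Hypothetical sketch of context-sensitive privacy handling.
# Categories, keywords, and policy names are invented for illustration.

SENSITIVE_KEYWORDS = {"diagnosis", "prescription", "symptom"}

def retention_policy(query):
    """Return a stricter handling policy for health-related queries."""
    words = set(query.lower().split())
    if words & SENSITIVE_KEYWORDS:
        return "do-not-personalize"
    return "standard"

retention_policy("symptom checker")      # health-related: stricter policy
retention_policy("restaurants near me")  # ordinary query: standard policy
```

A real system would of course need far richer context understanding than keyword matching, which is precisely the gap Pichai says AI could close.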

Of course, what’s good for users versus what’s good for Google versus what’s good for the other businesses that rely on Google’s data is a tricky question. And it’s one that AI alone can’t solve. Google is responsible for those choices, whether they’re made by people or robots.

The amount of scrutiny companies like Facebook and Google — and Google’s YouTube division — face over presenting inaccurate or outright manipulative information is growing every day, and for good reason.

Pichai thinks that Google’s basic approach for search can also be used for surfacing good, trustworthy content in the feed. “We can still use the same core principles we use in ranking around authoritativeness, trust, reputation.”
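To make the idea concrete, ranking on signals such as authoritativeness, trust, and reputation can be sketched as a weighted combination of per-result scores. The signal values and weights below are invented for illustration; Google’s actual ranking system is far more complex and not public:

```python
# Toy illustration of signal-based ranking (NOT Google's algorithm).
# Each candidate result carries invented scores in [0, 1]; we rank by
# a weighted sum of its signals.

WEIGHTS = {"authoritativeness": 0.5, "trust": 0.3, "reputation": 0.2}

def score(signals):
    """Combine per-source signals into a single ranking score."""
    return sum(WEIGHTS[name] * value for name, value in signals.items())

results = [
    {"url": "a.example",
     "signals": {"authoritativeness": 0.9, "trust": 0.8, "reputation": 0.7}},
    {"url": "b.example",
     "signals": {"authoritativeness": 0.4, "trust": 0.9, "reputation": 0.9}},
]

# Highest combined score first.
ranked = sorted(results, key=lambda r: score(r["signals"]), reverse=True)
```

The hard part, as the next paragraphs note, is not the arithmetic but deciding what the signals should measure when the content is opinion rather than fact.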

What he’s less sure about, however, is what to do beyond the realm of factual information — with genuine opinion: “I think the issue we all grapple with is how do you deal with the areas where people don’t agree or the subject areas get tougher?”

When it comes to presenting opinions on its feed, Pichai wonders if Google could “bring a better perspective, rather than just ranking alone. … Those are early areas of exploration for us, but I think we could do better there.”

Source: The Verge




Behind the Google diversity memo furor is fear of Google’s vast opaque power

It is fear of the opaque power that Google in particular, and Silicon Valley in general, wields over our lives.

If Google — and the tech world more generally — is sexist, or in the grips of a totalitarian cult of political correctness, or a secret hotbed of alt-right reactionaries, the consequences would be profound.

Google wields a monopoly over search, one of the central technologies of our age, and, alongside Facebook, dominates the internet advertising market, making it a powerful driver of both consumer opinion and the media landscape. 

It shapes the world in which we live in ways both obvious and opaque.

This is why trust matters so much in tech. It’s why Google, to attain its current status in society, had to promise, again and again, that it wouldn’t be evil. 

Compounding the problem is that the tech industry’s point of view is embedded deep in the product, not announced on the packaging. Its biases are quietly built into algorithms, reflected in platform rules, expressed in code few of us can understand and fewer of us will ever read.

But what if it actually is evil? Or what if it’s not evil but just immature, unreflective, and uncompassionate? And what if that’s the culture that designs the digital services the rest of us have to use?

The technology industry’s power is vast, and the way that power is expressed is opaque, so the only real assurance you can have that your interests and needs are being considered is to be in the room when the decisions are made and the code is written. But tech as an industry is unrepresentative of the people it serves and unaccountable in the way it serves them, and so there’s very little confidence among any group that the people in the room are the right ones.

Source: Vox (read the entire article by Ezra Klein)




Google’s #AI moonshot


Searcher-in-chief: Google CEO Sundar Pichai

“Building general artificial intelligence in a way that helps people meaningfully—I think the word moonshot is an understatement for that,” Pichai says, sounding startled that anyone might think otherwise. “I would say it’s as big as it gets.”

Officially, Google has always advocated for collaboration. But in the past, as it encouraged individual units to shape their own destinies, the company sometimes operated more like a myriad of fiefdoms. Now, Pichai is steering Google’s teams toward a common mission: infusing the products and services they create with AI.

To make sure that future gadgets are built for the AI-first era, Pichai has collected everything relating to hardware into a single group and hired Rick Osterloh to run it.

BUILD NOW, MONETIZE LATER

Jen Fitzpatrick, VP, Geo: “The Google Assistant wouldn’t exist without Sundar—it’s a core part of his vision for how we’re bringing all of Google together.”

If Google Assistant is indeed the evolution of Google search, it means that the company must aspire to turn it into a business with the potential to be huge in terms of profits as well as usage. How it will do that remains unclear, especially since Assistant is often provided in the form of a spoken conversation, a medium that doesn’t lend itself to the text ads that made Google rich.

“I’ve always felt if you solve problems for users in meaningful ways, there will become value as part of solving that equation,” Pichai argues. “Inherently, a lot of what people are looking for is also commercial in nature. It’ll tend to work out fine in the long run.”

“When you can align people to common goals, you truly get a multiplicative effect in an organization,” he tells me as we sit on a couch in Sundar’s Huddle after his Google Photos meeting. “The inverse is also true, if people are at odds with each other.” He is, as usual, smiling.

The company’s aim, he says, is to create products “that will affect the lives of billions of users, and that they’ll use a lot. Those are the kind of meaningful problems we want to work on.”

Source: Fast Company

 


How Deep Learning is making AI prejudiced

Blogger’s note: The authors of this research paper demonstrate what they call “machine prejudice” and show how fundamentally it derives from human culture.

“Concerns about machine prejudice are now coming to the fore–concerns that our historic biases and prejudices are being reified in machines,” they write. “Documented cases of automated prejudice range from online advertising (Sweeney, 2013) to criminal sentencing (Angwin et al., 2016).”

Following are a few excerpts: 

Abstract

“Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.

Discussion

“We show for the first time that if AI is to exploit via our language the vast knowledge that culture has compiled, it will inevitably inherit human-like prejudices. In other words, if AI learns enough about the properties of language to be able to understand and produce it, it also acquires cultural associations that can be offensive, objectionable, or harmful. These are much broader concerns than intentional discrimination, and possibly harder to address.

Awareness is better than blindness

“… where AI is partially constructed automatically by machine learning of human culture, we may also need an analog of human explicit memory and deliberate actions, that can be trained or programmed to avoid the expression of prejudice.

“Of course, such an approach doesn’t lend itself to a straightforward algorithmic formulation. Instead it requires a long-term, interdisciplinary research program that includes cognitive scientists and ethicists. …”
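The kind of measurement the excerpts describe can be illustrated with a toy version of an embedding association test: compare, by cosine similarity, how strongly a target word associates with each of two attribute words. The two-dimensional vectors below are invented for the example; the paper’s results come from embeddings trained on large real-world corpora:

```python
import math

# Toy version of a word-embedding association test. The vectors are
# invented; the actual study measures associations in embeddings
# learned from ordinary human language.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

vectors = {
    "flower":     [0.9, 0.1],
    "insect":     [0.1, 0.9],
    "pleasant":   [0.8, 0.2],
    "unpleasant": [0.2, 0.8],
}

def association(word, attr_a, attr_b):
    """Positive if `word` sits closer to attr_a than to attr_b."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

# In this toy space, "flower" leans pleasant and "insect" leans
# unpleasant, mirroring the human-like associations the paper
# recovers at scale from real text.
```

The paper’s point is that such associations appear automatically, without anyone programming them in, whenever the embeddings are trained on human-produced language.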

Click here to download the pdf of the report
Semantics derived automatically from language corpora necessarily contain human biases
Aylin Caliskan-Islam¹, Joanna J. Bryson², and Arvind Narayanan¹
¹ Princeton University
² University of Bath
Draft date: August 31, 2016


Grandma? Now you can see the bias in the data …

“Just type the word grandma in your favorite search engine image search and you will see the bias in the data, in the picture that is returned … you will see the race bias.” — Fei-Fei Li, Professor of Computer Science, Stanford University, speaking at the White House Frontiers Conference

Google image search for Grandma 


Bing image search for Grandma



We are evolving to an AI first world

“We are at a seminal moment in computing … we are evolving from a mobile first to an AI first world,” says Sundar Pichai.

“Our goal is to build a personal Google for each and every user … We want to build each user, his or her own individual Google.”

Watch 4 mins of Sundar Pichai’s key comments about the role of AI in our lives and how a personal Google for each of us will work. 


Artificial intelligence is quickly becoming as biased as we are


When you perform a Google search for everyday queries, you don’t typically expect systemic racism to rear its ugly head. Yet if you’re a woman searching for a hairstyle, that’s exactly what you might find.

A simple Google image search for ‘women’s professional hairstyles’ returns the following:

 … you could probably pat Google on the back and say ‘job well done.’ That is, until you try searching for ‘unprofessional women’s hairstyles’ and find this:


It’s not new. In fact, Boing Boing spotted this back in April.

What’s concerning though, is just how much of our lives we’re on the verge of handing over to artificial intelligence. With today’s deep learning algorithms, the ‘training’ of this AI is often as much a product of our collective hive mind as it is programming.

Artificial intelligence, in fact, is using our collective thoughts to train the next generation of automation technologies. All the while, it’s picking up our biases and making them more visible than ever.

This is just the beginning … If you want the scary stuff, we’re expanding algorithmic policing that relies on many of the same principles used to train the previous examples. In the future, our neighborhoods will see an increase or decrease in police presence based on data that we already know is biased.

Source: The Next Web


Time to change “whole system” of search

Kai Yu, Baidu (Photo: Wired)

Excerpt from Wired’s interview with Kai Yu of Baidu, China’s largest search engine:

Today, web searches for products or services give you little more than a long list of links, and “then it’s your job to read through all of those webpages to figure out what’s the meaning,” Yu says. But he wants something that works very differently.

“We need to fundamentally change the architecture of the whole system,” Yu explains.

That means building algorithms that can identify images and understand natural language, and then parse the relationships between all the stuff on the web to find exactly what you’re looking for. In other words, Baidu wants algorithms that work like people. Only faster.

Source: Wired, ‘Chinese Google’ opens artificial intelligence lab in Silicon Valley


But Watson, is it a good time FOR ME to buy a house?

AI developers at IBM Watson hope some day soon to answer a simple human question, like, ‘Is this a good time to buy a house?’ by having Watson quickly analyze news articles, forum posts, call logs, policy documents and web pages to report ‘window of opportunity’ data.

Phil Lawson: Frankly, Watson, this will be cool when you can do this! But even more important to a human is this question: “Is this a good time FOR ME to buy a house?” When Watson can return this kind of information, based on individual circumstances, this will be awesome. 

Aside

Point of this blog on Socializing AI

“Artificial Intelligence must be about more than our things. It must be about more than our machines. It must be a way to advance human behavior in complex human situations. But this will require wisdom-powered code. It will require imprinting AI’s genome with social intelligence for human interaction. It must begin right now.”
— Phil Lawson
(read more)
