Google’s #AI moonshot

Searcher-in-chief: Google CEO Sundar Pichai

“Building general artificial intelligence in a way that helps people meaningfully—I think the word moonshot is an understatement for that,” Pichai says, sounding startled that anyone might think otherwise. “I would say it’s as big as it gets.”

Officially, Google has always advocated for collaboration. But in the past, as it encouraged individual units to shape their own destinies, the company sometimes operated more like a myriad of fiefdoms. Now, Pichai is steering Google’s teams toward a common mission: infusing the products and services they create with AI.

To make sure that future gadgets are built for the AI-first era, Pichai has collected everything relating to hardware into a single group and hired Rick Osterloh to run it.

BUILD NOW, MONETIZE LATER

Jen Fitzpatrick, VP, Geo: “The Google Assistant wouldn’t exist without Sundar—it’s a core part of his vision for how we’re bringing all of Google together.”

If Google Assistant is indeed the evolution of Google search, it means that the company must aspire to turn it into a business with the potential to be huge in terms of profits as well as usage. How it will do that remains unclear, especially since Assistant is often provided in the form of a spoken conversation, a medium that doesn’t lend itself to the text ads that made Google rich.

“I’ve always felt if you solve problems for users in meaningful ways, there will become value as part of solving that equation,” Pichai argues. “Inherently, a lot of what people are looking for is also commercial in nature. It’ll tend to work out fine in the long run.”

“When you can align people to common goals, you truly get a multiplicative effect in an organization,” he tells me as we sit on a couch in Sundar’s Huddle after his Google Photos meeting. “The inverse is also true, if people are at odds with each other.” He is, as usual, smiling.

The company’s aim, he says, is to create products “that will affect the lives of billions of users, and that they’ll use a lot. Those are the kind of meaningful problems we want to work on.”

Source: Fast Company

Software is the future of healthcare: “digital therapeutics” instead of a pill

We’ll start to use “digital therapeutics” instead of getting a prescription to take a pill. Services that already exist — like behavioral therapies — might be able to scale better with the help of software, rather than be confined to in-person, brick-and-mortar locations.

Vijay Pande, a general partner at Andreessen Horowitz, runs the firm’s bio fund.

Source: Business Insider: Why an investor at Andreessen Horowitz thinks software is the future of healthcare

The Christianizing of AI

Blogger’s note: The following post illustrates the challenge of creating ethics for AI. There are many different faiths, each with its own belief system. How would an AI be programmed to serve these diverse ethical needs?

The ethics of artificial intelligence (AI) has drawn comments from the White House and British House of Commons in recent weeks, along with a nonprofit organization established by Amazon, Google, Facebook, IBM and Microsoft. Now, Baptist computer scientists have called Christians to join the discussion.

Louise Perkins, professor of computer science at California Baptist University, told Baptist Press she is “quite worried” at the lack of an ethical code related to AI. The Christian worldview, she added, has much to say about how automated devices should be programmed to safeguard human flourishing.

Individuals with a Christian worldview need to be involved in designing and programming AI systems, Perkins said, to help prevent those systems from behaving in ways that violate the Bible’s ethical standards.

Believers can thus employ “the mathematics or the logic we will be using to program these devices” to “infuse” a biblical worldview “into an [AI] system.” 

Perkins also noted that ethical standards will have to be programmed into AI systems involved in surgery and warfare among other applications. A robot performing surgery on a pregnant woman, for instance, could have to weigh the life of the baby relative to the life of the mother, and an AI weapon system could have to apply standards of just warfare.

Source: The Pathway


12 Observations About Artificial Intelligence From The O’Reilly AI Conference

Blogger’s note: Here are a few excerpts from a long but very informative review. (The best may be last.)

The conference was organized by Ben Lorica and Roger Chen, with Peter Norvig and Tim O’Reilly acting as honorary program chairs.

For a machine to act in an intelligent way, said [Yann] LeCun, it needs “to have a copy of the world and its objective function in such a way that it can roll out a sequence of actions and predict their impact on the world.” To do this, machines need to understand how the world works, learn a large amount of background knowledge, perceive the state of the world at any given moment, and be able to reason and plan.
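LeCun is describing model-based planning: use a learned forward model of the world to roll out candidate action sequences and score the predicted end states against the objective function. Here is a minimal sketch of that loop; the function names and the toy 1-D world are purely illustrative, not from any real system.

```python
# Minimal sketch of planning with a "copy of the world": roll out action
# sequences through a forward model and keep the one whose predicted
# outcome scores best under the objective. All names are illustrative.
import itertools

def plan(state, model, objective, actions, horizon=3):
    """Return the action sequence whose simulated rollout scores best."""
    best_score, best_seq = float("-inf"), None
    for seq in itertools.product(actions, repeat=horizon):
        s = state
        for a in seq:
            s = model(s, a)  # predict the action's impact on the world
        score = objective(s)  # evaluate the imagined end state
        if score > best_score:
            best_score, best_seq = score, seq
    return best_seq

# Toy usage: a 1-D world where the goal is to reach position 5.
model = lambda s, a: s + a
objective = lambda s: -abs(s - 5)
print(plan(0, model, objective, actions=[-1, 0, 1, 2]))  # -> (1, 2, 2)
```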

Peter Norvig explained the reasons why machine learning is more difficult than traditional software: “lack of clear abstraction barriers”—debugging is harder because it’s difficult to isolate a bug; “non-modularity”—if you change anything, you end up changing everything; “nonstationarity”—the need to account for new data; “whose data is this?”—issues around privacy, security, and fairness; and lack of adequate tools and processes—existing ones were developed for traditional software.

AI must consider culture and context—“training shapes learning”

“Many of the current algorithms have already built in them a country and a culture,” said Genevieve Bell, Intel Fellow and Director of Interaction and Experience Research at Intel. As today’s smart machines are (still) created and used only by humans, culture and context are important factors to consider in their development.

Both Rana El Kaliouby (CEO of Affectiva, a startup developing emotion-aware AI) and Aparna Chennapragada (Director of Product Management at Google) stressed the importance of using diverse training data—if you want your smart machine to work everywhere on the planet it must be attuned to cultural norms.

“Training shapes learning—the training data you put in determines what you get out,” said Chennapragada. And it’s not just culture that matters, but also context.

The £10 million Leverhulme Centre for the Future of Intelligence will explore “the opportunities and challenges of this potentially epoch-making technological development,” namely AI. According to The Guardian, Stephen Hawking said at the opening of the Centre,

“We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”

Gary Marcus, professor of psychology and neural science at New York University and cofounder and CEO of Geometric Intelligence, said:

“a lot of smart people are convinced that deep learning is almost magical—I’m not one of them … A better ladder does not necessarily get you to the moon.”

Tom Davenport added, at the conference: “Deep learning is not profound learning.”

AI changes how we interact with computers—and it needs a dose of empathy

AI may still be hampered by a futile search for human-level intelligence while locked into a materialist paradigm

Maybe, just maybe, our minds are not computers and computers do not resemble our brains?  And maybe, just maybe, if we finally abandon the futile pursuit of replicating “human-level AI” in computers, we will find many additional–albeit “narrow”–applications of computers to enrich and improve our lives?

Gary Marcus complained about research papers presented at the Neural Information Processing Systems (NIPS) conference, saying that they are like alchemy, adding a layer or two to a neural network, “a little fiddle here or there.” Instead, he suggested “a richer base of instruction set of basic computations,” arguing that “it’s time for genuinely new ideas.”

Is it possible that this paradigm—and the driving ambition at its core to play God and develop human-like machines—has led to the infamous “AI Winter”? And that continuing to adhere to it and refusing to consider “genuinely new ideas,” out-of-the-dominant-paradigm ideas, will lead to yet another AI Winter?

Source: Forbes

MIT makes breakthrough in morality-proofing artificial intelligence

Researchers at MIT are investigating ways of making artificial neural networks more transparent in their decision-making.

As they stand now, artificial neural networks are a wonderful tool for discerning patterns and making predictions. But they also have the drawback of not being terribly transparent. The beauty of an artificial neural network is its ability to sift through heaps of data and find structure within the noise.

This is not dissimilar from the way we might look up at clouds and see faces amidst their patterns. And just as we might have trouble explaining to someone why a face jumped out at us from the wispy trails of a cirrus cloud formation, artificial neural networks are not explicitly designed to reveal what particular elements of the data prompted them to decide a certain pattern was at work and make predictions based upon it.

We tend to want a little more explanation when human lives hang in the balance — for instance, if an artificial neural net has just diagnosed someone with a life-threatening form of cancer and recommends a dangerous procedure. At that point, we would likely want to know what features of the person’s medical workup tipped the algorithm in favor of its diagnosis.

MIT researchers Lei, Barzilay, and Jaakkola designed a neural network that would be forced to provide explanations for why it reached a certain conclusion.
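The article doesn’t spell out the architecture, but the published idea (Lei, Barzilay, and Jaakkola, “Rationalizing Neural Predictions”) is a contract between two components: a generator selects fragments of the input, and the predictor may use only those fragments, so the selection itself is the explanation. Below is a heavily simplified, non-neural sketch of that contract, with a toy keyword scorer standing in for the learned generator.

```python
# Sketch of the rationale contract: the predictor sees ONLY the fragments
# the selector kept, so the kept fragments double as the explanation.
# The keyword scorer below is a stand-in for a learned generator.
def select_rationale(tokens, scorer, threshold=0.5):
    """Keep only the tokens the scorer marks as decision-relevant."""
    return [t for t in tokens if scorer(t) > threshold]

def predict(rationale):
    """Toy predictor: flag high risk if any red-flag token survives."""
    return "high risk" if {"mass", "irregular"} & set(rationale) else "low risk"

scorer = lambda t: 0.9 if t in {"mass", "irregular", "margin"} else 0.1
report = "scan shows irregular mass with poorly defined margin".split()
rationale = select_rationale(report, scorer)
print(predict(rationale), "because of:", rationale)
# -> high risk because of: ['irregular', 'mass', 'margin']
```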

Source: ExtremeTech


China’s plan to organize its society relies on ‘big data’ to rate everyone

Imagine a world where an authoritarian government monitors everything you do, amasses huge amounts of data on almost every interaction you make, and awards you a single score that measures how “trustworthy” you are.

In this world, anything from defaulting on a loan to criticizing the ruling party, from running a red light to failing to care for your parents properly, could cause you to lose points. 

This is not the dystopian superstate of Steven Spielberg’s “Minority Report,” in which all-knowing police stop crime before it happens. But it could be China by 2020.

And in this world, your score becomes the ultimate truth of who you are — determining whether you can borrow money, get your children into the best schools or travel abroad; whether you get a room in a fancy hotel, a seat in a top restaurant — or even just get a date.

It is the scenario contained in China’s ambitious plans to develop a far-reaching social credit system, a plan that the Communist Party hopes will build a culture of “sincerity” and a “harmonious socialist society” where “keeping trust is glorious.”

The ambition is to collect every scrap of information available online about China’s companies and citizens in a single place — and then assign each of them a score based on their political, commercial, social and legal “credit.”

Mobile device usage and e-commerce are in wide use in China, and now the Communist Party wants to compile a “social credit” score based on citizens’ every activity. (Michael Robinson Chavez/The Washington Post)

Source: The Washington Post


New Research Center to Explore Ethics of Artificial Intelligence

The Chimp robot, built by a Carnegie Mellon team, took third place in a competition held by DARPA last year. The school is starting a research center focused on the ethics of artificial intelligence. Credit Chip Somodevilla/Getty Images

Carnegie Mellon University plans to announce on Wednesday that it will create a research center that focuses on the ethics of artificial intelligence.

The ethics center, called the K&L Gates Endowment for Ethics and Computational Technologies, is being established at a time of growing international concern about the impact of A.I. technologies.

“We are at a unique point in time where the technology is far ahead of society’s ability to restrain it”
Subra Suresh, Carnegie Mellon’s president

The new center is being created with a $10 million gift from K&L Gates, an international law firm headquartered in Pittsburgh.

Peter J. Kalis, chairman of the law firm, said the potential impact of A.I. technology on the economy and culture made it essential that as a society we make thoughtful, ethical choices about how the software and machines are used.

“Carnegie Mellon resides at the intersection of many disciplines,” he said. “It will take a synthesis of the best thinking of all of these disciplines for society to define the ethical constraints on the emerging A.I. technologies.”

Source: NY Times


Genetically engineered humans will arrive sooner than you think. And we’re not ready

Michael Bess is a historian of science at Vanderbilt University and the author of a fascinating new book, Our Grandchildren Redesigned: Life in a Bioengineered Society. Bess’s book offers a sweeping look at our genetically modified future, a future as terrifying as it is promising.

“What’s happening is bigger than any one of us”

We single out the industrial revolutions of the past as major turning points in human history because they marked major ways in which we changed our surroundings to make our lives easier, better, longer, healthier.

So these are just great landmarks, and I’m comparing this to those big turning points because now the technology, instead of being applied to our surroundings — how we get food for ourselves, how we transport things, how we shelter ourselves, how we communicate with each other — now those technologies are being turned directly on our own biology, on our own bodies and minds.

And so, instead of transforming the world around ourselves to make it more what we wanted it to be, now it’s becoming possible to transform ourselves into whatever it is that we want to be. And there’s both power and danger in that, because people can make terrible miscalculations, and they can alter themselves, maybe in ways that are irreversible, that do irreversible harm to the things that really make their lives worth living.

“We’re going to give ourselves a power that we may not have the wisdom to control very well”

I think most historians of technology … see technology and society as co-constructing each other over time, which gives human beings a much greater space for having a say in which technologies will be pursued and what direction we will take, and how much we choose to have them come into our lives and in what ways.

Source: Vox

Artificial Intelligence’s White Guy Problem

Credit Bianca Bagnarelli

Warnings by luminaries like Elon Musk and Nick Bostrom about “the singularity” — when machines become smarter than humans — have attracted millions of dollars and spawned a multitude of conferences.

But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems.

Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.
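ProPublica’s finding is, at bottom, a comparison of error rates across groups. A minimal sketch of that style of audit, with fabricated records and an illustrative helper (not ProPublica’s actual code or data):

```python
# Illustrative audit: compare a risk score's false positive rate across
# groups. A false positive here is someone who did NOT reoffend but was
# flagged high risk. Records are fabricated for the example.
def false_positive_rate(records, group):
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in negatives if r["flagged_high_risk"]]
    return len(flagged) / len(negatives) if negatives else 0.0

records = [
    {"group": "A", "reoffended": False, "flagged_high_risk": True},
    {"group": "A", "reoffended": False, "flagged_high_risk": False},
    {"group": "B", "reoffended": False, "flagged_high_risk": False},
    {"group": "B", "reoffended": False, "flagged_high_risk": False},
]
for g in ("A", "B"):
    print(g, false_positive_rate(records, g))  # A: 0.5, B: 0.0
```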

Histories of discrimination can live on in digital platforms, and if they go unquestioned, they become part of the logic of everyday algorithmic systems.

Another scandal emerged recently when it was revealed that Amazon’s same-day delivery service was unavailable for ZIP codes in predominantly black neighborhoods. The areas overlooked were remarkably similar to those affected by mortgage redlining in the mid-20th century. Amazon promised to redress the gaps, but it reminds us how systemic inequality can haunt machine intelligence.

And then there’s gender discrimination. Last July, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for highly paid jobs. The complexity of how search engines show ads to internet users makes it hard to say why this happened — whether the advertisers preferred showing the ads to men, or the outcome was an unintended consequence of the algorithms involved.

Regardless, algorithmic flaws aren’t easily discoverable: How would a woman know to apply for a job she never saw advertised? How might a black community learn that it was being overpoliced by software?

Like all technologies before it, artificial intelligence will reflect the values of its creators.

Source: New York Times – Kate Crawford is a principal researcher at Microsoft and co-chairwoman of a White House symposium on society and A.I.


How Deep Learning is making AI prejudiced

Blogger’s note: The authors of this research paper show what they refer to as “machine prejudice” and how fundamentally it derives from human culture.

“Concerns about machine prejudice are now coming to the fore–concerns that our historic biases and prejudices are being reified in machines,” they write. “Documented cases of automated prejudice range from online advertising (Sweeney, 2013) to criminal sentencing (Angwin et al., 2016).”

Following are a few excerpts: 

Abstract

“Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.

Discussion

“We show for the first time that if AI is to exploit via our language the vast knowledge that culture has compiled, it will inevitably inherit human-like prejudices. In other words, if AI learns enough about the properties of language to be able to understand and produce it, it also acquires cultural associations that can be offensive, objectionable, or harmful. These are much broader concerns than intentional discrimination, and possibly harder to address.

Awareness is better than blindness

“… where AI is partially constructed automatically by machine learning of human culture, we may also need an analog of human explicit memory and deliberate actions, that can be trained or programmed to avoid the expression of prejudice.

“Of course, such an approach doesn’t lend itself to a straightforward algorithmic formulation. Instead it requires a long-term, interdisciplinary research program that includes cognitive scientists and ethicists. …”

Click here to download the pdf of the report:
Semantics derived automatically from language corpora necessarily contain human biases
Aylin Caliskan-Islam, Joanna J. Bryson, and Arvind Narayanan
Princeton University; University of Bath
Draft date: August 31, 2016.
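The paper’s measurements come from word embeddings, where bias shows up as a difference in similarity between vectors. As a rough illustration of the kind of association being measured (not the authors’ exact test statistic), with tiny fabricated vectors in place of real embeddings:

```python
# Toy illustration: bias as a difference in cosine similarity between
# word vectors. Real studies use embeddings trained on large corpora;
# these 2-D vectors are fabricated so the effect is easy to see.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Dimension 0 loosely means "career-ness", dimension 1 "family-ness".
vec = {
    "he":     (0.9, 0.1),
    "she":    (0.2, 0.8),
    "career": (1.0, 0.0),
    "family": (0.0, 1.0),
}

for word in ("he", "she"):
    bias = cosine(vec[word], vec["career"]) - cosine(vec[word], vec["family"])
    print(word, round(bias, 2))  # positive = closer to "career": he 0.88, she -0.73
```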


Grandma? Now you can see the bias in the data …

“Just type the word grandma in your favorite search engine image search and you will see the bias in the data, in the picture that is returned … you will see the race bias.” — Fei-Fei Li, Professor of Computer Science, Stanford University, speaking at the White House Frontiers Conference

Google image search for Grandma 


Bing image search for Grandma



It seems that A.I. will be the undoing of us all … romantically, at least

As if finding love weren’t hard enough, the creators of Operator decided to show just how Artificial Intelligence could ruin modern relationships.

Artificial Intelligence so often focuses on the idea of “perfection.” As most of us know, people are anything but perfect, and believing that your S.O. (Significant Other) is perfect can lead to problems. The point of an A.I., however, is perfection — so why would someone choose the flaws of a human being over an A.I. that can give you all the comfort you want with none of the costs?

Hopefully, people continue to choose imperfection.

Source: Inverse.com


Civil Rights and Big Data

Blogger’s note: We’ve posted several articles on the bias and prejudice inherent in big data, which with machine learning results in “machine prejudice,” all of which impacts humans when they interact with intelligent agents.

Apparently, as far back as May 2014, the Executive Office of the President started issuing reports on the potential in “algorithmic systems” for “encoding discrimination in automated decisions”. The most recent report, from May 2016, addressed two additional challenges:

1) Challenges relating to data used as inputs to an algorithm;

2) Challenges related to the inner workings of the algorithm itself.

Here are two excerpts:

The Obama Administration’s Big Data Working Group released reports on May 1, 2014 and February 5, 2015. These reports surveyed the use of data in the public and private sectors and analyzed opportunities for technological innovation as well as privacy challenges. One important social justice concern the 2014 report highlighted was “the potential of encoding discrimination in automated decisions”—that is, that discrimination may “be the inadvertent outcome of the way big data technologies are structured and used.”

To avoid exacerbating biases by encoding them into technological systems, we need to develop a principle of “equal opportunity by design”—designing data systems that promote fairness and safeguard against discrimination from the first step of the engineering process and continuing throughout their lifespan.

Download the report here: Whitehouse.gov

References:

https://www.whitehouse.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence

http://www.frontiersconference.org/


China has now eclipsed U.S. in AI research

As more industries and policymakers awaken to the benefits of machine learning, two countries appear to be pulling away in the research race. The results will probably have significant implications for the future of AI.

Chart: articles on deep learning published annually (United States vs. China)

What’s striking about it is that although the United States was an early leader on deep-learning research, China has effectively eclipsed it in terms of the number of papers published annually on the subject. The rate of increase is remarkably steep, reflecting how quickly China’s research priorities have shifted.

The quality of China’s research is also striking. The chart below narrows the research to include only those papers that were cited at least once by other researchers, an indication that the papers were influential in the field.

Compared with other countries, the United States and China are spending tremendous research attention on deep learning. But, according to the White House, the United States is not investing nearly enough in basic research.

“Current levels of R&D spending are half to one-quarter of the level of R&D investment that would produce the optimal level of economic growth,” a companion report published this week by the Obama administration finds.

Source: The Washington Post


Machine learning needs rich feedback for AI teaching

With AI systems largely receiving feedback in a binary yes/no format, Monash University professor Tom Drummond says rich feedback is needed to allow AI systems to know why answers are incorrect.

In much the same way children have to be told not only what they are saying is wrong, but why it is wrong, artificial intelligence (AI) systems need to be able to receive and act on similar feedback.

“Rich feedback is important in human education, I think probably we’re going to see the rise of machine teaching as an important field — how do we design systems so that they can take rich feedback and we can have a dialogue about what the system has learnt?”

“We need to be able to give it rich feedback and say ‘No, that’s unacceptable as an answer because …’ We don’t want to simply say ‘No’ because that’s the same as saying it is grammatically incorrect, and it’s a very, very blunt hammer,” Drummond said.
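A rough way to picture Drummond’s contrast: a bare yes/no returns a single bit, while rich feedback names the constraint an answer violates, giving the system something to act on. A purely illustrative sketch:

```python
# Purely illustrative: binary feedback gives one bit; rich feedback names
# every rule the answer breaks, so the learner knows WHY it was wrong.
def binary_feedback(answer, correct):
    return answer == correct  # True/False and nothing else

def rich_feedback(answer, constraints):
    """Return (ok, reasons): each named rule the answer violates."""
    reasons = [name for name, rule in constraints.items() if not rule(answer)]
    return (not reasons, reasons)

constraints = {
    "must be positive": lambda x: x > 0,
    "must be even":     lambda x: x % 2 == 0,
}
print(binary_feedback(-3, 4))          # False -- a very blunt hammer
print(rich_feedback(-3, constraints))  # (False, ['must be positive', 'must be even'])
```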

The flaw of the objective function

According to Drummond, one problematic feature of AI systems is the objective function that sits at the heart of a system’s design.

The professor pointed to the match between Google DeepMind’s AlphaGo and South Korean Go champion Lee Se-dol in March, which saw the artificial intelligence beat human intelligence by 4 games to 1.

In the fourth match, the only one in which Se-dol picked up a victory, the machine, after clearly falling behind, played a number of moves that, Drummond said, would have been insulting had a human played them, given the position AlphaGo found itself in.

“Here’s the thing, the objective function was the highest probability of victory, it didn’t really understand the social niceties of the game.

“At that point AlphaGo knew it had lost but it still tried to maximise its probability of victory, so it played all these moves … a move that threatens a large group of stones, but has a really obvious counter and if somehow the human misses the counter move, then it’s won — but of course you would never play this, it’s not appropriate.”
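The objective-function point can be made concrete: the same move chooser picks differently depending on what it maximises. The candidate moves and numbers below are invented for illustration; `p_win` is a probability of victory and `margin` an expected point margin.

```python
# Invented numbers: a chooser maximising win probability plays the
# "insulting" trick move from a lost position; one maximising expected
# margin plays the respectable move instead.
def best_move(moves, objective):
    return max(moves, key=objective)

moves = {
    "solid endgame move": {"p_win": 0.02, "margin": -8},
    "desperate trick":    {"p_win": 0.05, "margin": -40},  # wins only on a blunder
}
print(best_move(moves, lambda m: moves[m]["p_win"]))   # -> desperate trick
print(best_move(moves, lambda m: moves[m]["margin"]))  # -> solid endgame move
```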

Source: ZDNet


We are evolving to an AI first world

“We are at a seminal moment in computing … we are evolving from a mobile first to an AI first world,” says Sundar Pichai.

“Our goal is to build a personal Google for each and every user … We want to build each user, his or her own individual Google.”

Watch 4 mins of Sundar Pichai’s key comments about the role of AI in our lives and how a personal Google for each of us will work. 


Google’s AI Plans Are A Privacy Nightmare

Google is betting that people care more about convenience and ease than they do about a seemingly oblique notion of privacy, and it is increasingly correct in that assumption.

Google’s new assistant, which debuted in the company’s new messaging app Allo, works like this: Simply ask the assistant a question about the weather, nearby restaurants, or for directions, and it responds with detailed information right there in the chat interface.

Because Google’s assistant recommends things that are innately personal to you, like where to eat tonight or how to get from point A to B, it is amassing a huge collection of your most personal thoughts, visited places, and preferences. In order for the AI to “learn,” it will have to collect and analyze as much data about you as possible, so that it can serve you more accurate recommendations, suggestions, and data.

In order for artificial intelligence to function, your messages have to be unencrypted.

These new assistants are really cool, and the reality is that tons of people will probably use them and enjoy the experience. But at the end of the day, we’re sacrificing the security and privacy of our data so that Google can develop what will eventually become a new revenue stream. Lest we forget: Google and Facebook have a responsibility to investors, and an assistant that offers up a sponsored result when you ask it what to grab for dinner tonight could be a huge moneymaker.

Source: Gizmodo


Artificial intelligence is quickly becoming as biased as we are


When you perform a Google search for everyday queries, you don’t typically expect systemic racism to rear its ugly head. Yet, if you’re a woman searching for a hairstyle, that’s exactly what you might find.

A simple Google image search for ‘women’s professional hairstyles’ returns the following:

 … you could probably pat Google on the back and say ‘job well done.’ That is, until you try searching for ‘unprofessional women’s hairstyles’ and find this:


It’s not new. In fact, Boing Boing spotted this back in April.

What’s concerning though, is just how much of our lives we’re on the verge of handing over to artificial intelligence. With today’s deep learning algorithms, the ‘training’ of this AI is often as much a product of our collective hive mind as it is programming.

Artificial intelligence, in fact, is using our collective thoughts to train the next generation of automation technologies. All the while, it’s picking up our biases and making them more visible than ever.

This is just the beginning … If you want the scary stuff, we’re expanding algorithmic policing that relies on many of the same principles used to train the previous examples. In the future, our neighborhoods will see an increase or decrease in police presence based on data that we already know is biased.

Source: The Next Web


UC Berkeley launches Center for Human-Compatible Artificial Intelligence

The primary focus of the new center is to ensure that AI systems are “beneficial to humans,” says UC Berkeley AI expert Stuart Russell.

The center will work on ways to guarantee that the most sophisticated AI systems of the future, which may be entrusted with control of critical infrastructure and may provide essential services to billions of people, will act in a manner that is aligned with human values.

“In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think.”

Source: Berkeley.edu


CIA using deep learning neural networks to predict social unrest

In October 2015, the CIA opened the Directorate for Digital Innovation in order to “accelerate the infusion of advanced digital and cyber capabilities,” the first new directorate to be created by the government agency since 1963.

“What we’re trying to do within a unit of my directorate is leverage what we know from social sciences on the development of instability, coups and financial instability, and take what we know from the past six or seven decades and leverage what is becoming the instrumentation of the globe.”

In fact, over the summer of 2016, the CIA found the intelligence provided by the neural networks was so useful that it provided the agency with a “tremendous advantage” when dealing with situations …

Source: IBTimes


If a robot has enough human characteristics people will lie to it to save hurting its feelings, study says

The study, which explored how robots can gain a human’s trust even when they make mistakes, pitted an efficient but inexpressive robot against an error prone, emotional one and monitored how its colleagues treated it.

The researchers found that people are more likely to forgive a personable robot’s mistakes, and will even go so far as lying to the robot to prevent its feelings from being hurt. 

Researchers at the University of Bristol and University College London created a robot called Bert to help participants with a cooking exercise. Bert was given two large eyes and a mouth, making it capable of looking happy and sad, or not expressing emotion at all.

“Human-like attributes, such as regret, can be powerful tools in negating dissatisfaction,” said Adrianna Hamacher, the researcher behind the project. “But we must identify with care which specific traits we want to focus on and replicate. If there are no ground rules then we may end up with robots with different personalities, just like the people designing them.” 

In one set of tests the robot performed the tasks perfectly and didn’t speak or change its happy expression. In another it would make a mistake that it tried to rectify, but wouldn’t speak or change its expression.

A third version of Bert would communicate with the chef by asking questions such as “Are you ready for the egg?” But when it tried to help, it would drop the egg and react with a sad face in which its eyes widened and the corners of its mouth were pulled downwards. It then tried to make up for the fumble by apologising and telling the human that it would try again.

Once the omelette had been made this third Bert asked the human chef if it could have a job in the kitchen. Participants in the trial said they feared that the robot would become sad again if they said no. One of the participants lied to the robot to protect its feelings, while another said they felt emotionally blackmailed.

At the end of the trial the researchers asked the participants which robot they preferred working with. Even though the third robot made mistakes, 15 of the 21 participants picked it as their favourite.

Source: The Telegraph


Sixty-two percent of organizations will be using artificial intelligence (AI) by 2018, says Narrative Science

Artificial intelligence received $974m of funding as of June 2016, and this figure will only rise with the news that 2016 saw more AI patent applications than ever before.

This year’s funding is set to surpass 2015’s total and CB Insights suggests that 200 AI-focused companies have raised nearly $1.5 billion in equity funding.

Artificial Intelligence statistics by sector

AI isn’t limited to the business sphere; in fact, the personal robot market, including ‘care-bots’, could reach $17.4bn by 2020.

Care-bots could prove to be a fantastic solution as the world’s populations see an exponential rise in elderly people. Japan is leading the way, with a third of its government budget for robots devoted to the elderly.

Source: Raconteur: The rise of artificial intelligence in 6 charts


4th revolution challenges our ideas of being human


Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, is convinced that we are at the beginning of a revolution that is fundamentally changing the way we live, work and relate to one another.

Some call it the fourth industrial revolution, or industry 4.0, but whatever you call it, it represents the combination of cyber-physical systems, the Internet of Things, and the Internet of Systems.

Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, has published a book entitled The Fourth Industrial Revolution in which he describes how this fourth revolution is fundamentally different from the previous three, which were characterized mainly by advances in technology.

In this fourth revolution, we are facing a range of new technologies that combine the physical, digital and biological worlds. These new technologies will impact all disciplines, economies and industries, and even challenge our ideas about what it means to be human.

It seems a safe bet to say, then, that our current political, business, and social structures may not be ready or capable of absorbing all the changes a fourth industrial revolution would bring, and that major changes to the very structure of our society may be inevitable.

Schwab said, “The changes are so profound that, from the perspective of human history, there has never been a time of greater promise or potential peril. My concern, however, is that decision makers are too often caught in traditional, linear (and non-disruptive) thinking or too absorbed by immediate concerns to think strategically about the forces of disruption and innovation shaping our future.”

Schwab calls for leaders and citizens to “together shape a future that works for all by putting people first, empowering them and constantly reminding ourselves that all of these new technologies are first and foremost tools made by people for people.”

Source: Forbes, World Economic Forum
