We’re so unprepared for the robot apocalypse

Industrial robots alone eliminated up to 670,000 American jobs between 1990 and 2007.

It seems that after a factory sheds workers, that economic pain reverberates, triggering further unemployment at, say, the grocery store or the neighborhood car dealership.

In a way, this is surprising. Economists understand that automation has costs, but they have largely emphasized the benefits: Machines make things cheaper, and they free up workers to do other jobs.

The latest study reveals that for manufacturing workers, the process of adjusting to technological change has been much slower and more painful than most experts thought. 

every industrial robot eliminated about three manufacturing positions, plus three more jobs from around town

“We were looking at a span of 20 years, so in that timeframe, you would expect that manufacturing workers would be able to find other employment,” Restrepo said. Instead, not only did the factory jobs vanish, but other local jobs disappeared too.

This evidence draws attention to the losers — the dislocated factory workers who just can’t bounce back

one robot in the workforce led to the loss of 6.2 jobs within a commuting zone (the area within which local people travel to work).

The robots also reduce wages, with one robot per thousand workers leading to a wage decline of between 0.25% and 0.5% (Fortune).
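To make those magnitudes concrete, here is a quick back-of-envelope calculation in Python. The commuting-zone workforce and robot counts are invented for illustration; only the per-robot coefficients (6.2 jobs lost per robot, a 0.25% to 0.5% wage decline per robot per thousand workers) come from the reported findings.

```python
# Back-of-envelope sketch. The workforce and robot counts below are
# hypothetical; the per-robot coefficients are the study's headline
# estimates as reported in the article.
workers    = 100_000   # hypothetical commuting-zone workforce
new_robots = 300       # hypothetical number of robots adopted

jobs_lost     = 6.2 * new_robots                # 6.2 jobs per robot
robots_per_1k = new_robots / (workers / 1_000)  # robot density

lo = robots_per_1k * 0.25                       # wage decline, low end
hi = robots_per_1k * 0.50                       # wage decline, high end

print(f"estimated jobs lost: {jobs_lost:,.0f}")
print(f"robot density: {robots_per_1k:.1f} per 1,000 workers")
print(f"estimated wage decline: {lo:.2f}% to {hi:.2f}%")
```

At these assumed numbers, 300 robots would cost roughly 1,860 local jobs and depress wages by 0.75% to 1.5%.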

None of these efforts, though, seems to be doing enough for communities that have lost their manufacturing bases, where people face reduced earnings for the rest of their lives.

Perhaps that much was obvious. After all, anecdotes about the Rust Belt abound. But the new findings bolster the conclusion that these economic dislocations are not brief setbacks, but can hurt areas for an entire generation.

How do we even know that automation is a big part of the story at all? A key bit of evidence is that, despite the massive layoffs, American manufacturers are making more stuff than ever. Factories have become vastly more productive.

some consultants believe that the number of industrial robots will quadruple in the next decade, which could mean millions more displaced manufacturing workers

The question now is what to do when the period of “maladjustment” lasts decades, or possibly a lifetime, as the latest evidence suggests.

automation amplified opportunities for people with advanced skills and talents

Source: The Washington Post


Tech Reckons With the Problems It Helped Create

A festival goer at the 2017 SXSW Conference and Festivals in Austin, Texas.

This year, the SXSW conference itself feels a lot like a hangover.

It’s as if the coastal elites who attend each year finally woke up with a serious case of the Sunday scaries, realizing that the many apps, platforms, and doodads SXSW has launched and glorified over the years haven’t really made the world a better place. In fact, they’ve often come with wildly destructive and dangerous side effects. Sure, it all seemed like a good idea in 2013!

But now the party’s over. It’s time for the regret-filled cleanup.

speakers related how the very platforms that were meant to promote a marketplace of ideas online have become filthy junkyards of harassment and disinformation.

Yasmin Green, who leads an incubator within Alphabet called Jigsaw, focused her remarks on the rise of fake news, and even brought two propaganda publishers with her on stage to explain how, and why, they do what they do. For Jestin Coler, founder of the phony Denver Guardian, it was an all too easy way to turn a profit during the election.

“To be honest, my mortgage was due,” Coler said of what inspired him to write a bogus article claiming an FBI agent related to Hillary Clinton’s email investigation was found dead in a murder-suicide. That post was shared some 500,000 times just days before the election.

While prior years’ panels may have optimistically offered up more tech as the answer to what ails tech, this year was decidedly short on solutions.

There seemed to be, throughout the conference, a keen awareness of the limits human beings ought to place on the software that is very much eating the world.

Source: Wired


Technology is the main driver of the recent increases in inequality

Artificial Intelligence And Income Inequality

While economists debate the extent to which technology plays a role in global inequality, most agree that tech advances have exacerbated the problem.

Economist Erik Brynjolfsson said,

“My reading of the data is that technology is the main driver of the recent increases in inequality. It’s the biggest factor.”

AI expert Yoshua Bengio suggests that equality and ensuring a shared benefit from AI could be pivotal in the development of safe artificial intelligence. Bengio, a professor at the University of Montreal, explains, “In a society where there’s a lot of violence, a lot of inequality, [then] the risk of misusing AI or having people use it irresponsibly in general is much greater. Making AI beneficial for all is very central to the safety question.”

“It’s almost a moral principle that we should share benefits among more people in society,” argued Bart Selman, a professor at Cornell University … “So we have to go into a mode where we are first educating the people about what’s causing this inequality and acknowledging that technology is part of that cost, and then society has to decide how to proceed.”

Source: HuffPost

 


Artificial intelligence is ripe for abuse

Microsoft’s Kate Crawford tells SXSW that society must prepare for authoritarian movements to test the ‘power without accountability’ of AI

As artificial intelligence becomes more powerful, people need to make sure it’s not used by authoritarian regimes to centralize power and target certain populations, Microsoft Research’s Kate Crawford warned on Sunday.

“We want to make these systems as ethical as possible and free from unseen biases.”

In her SXSW session, titled Dark Days: AI and the Rise of Fascism, Crawford, who studies the social impact of machine learning and large-scale data systems, explained ways that automated systems and their encoded biases can be misused, particularly when they fall into the wrong hands.

“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” she said.

One of the key problems with artificial intelligence is that it is often invisibly coded with human biases.

“We should always be suspicious when machine learning systems are described as free from bias if they’ve been trained on human-generated data,” Crawford said. “Our biases are built into that training data.”

Source: The Guardian

 


Humans are born irrational, and that has made us better decision-makers

“Facts on their own don’t tell you anything. It’s only when they are paired with preferences, desires, with whatever gives you pleasure or pain, that they can guide your behavior. Even if you knew the facts perfectly, that still doesn’t tell you anything about what you should do.”

Even if we were able to live life according to detailed calculations, doing so would put us at a massive disadvantage. This is because we live in a world of deep uncertainty, under which neat logic simply isn’t a good guide.

It’s well established that data-based decision-making doesn’t inoculate against irrationality or prejudice, but even if it were possible to create a perfectly rational decision-making system based on all past experience, it wouldn’t be a foolproof guide to the future.

Courageous acts and leaps of faith are often attempts to overcome great and seemingly insurmountable challenges. (It wouldn’t take much courage if it were easy to do.) But while courage may be irrational or hubristic, we wouldn’t have many great entrepreneurs or works of art without those with a somewhat illogical faith in their own abilities.

There are occasions where overly rational thinking would be highly inappropriate. Take finding a partner, for example. If you had the choice between a good-looking high-earner who your mother approves of, versus someone you love who makes you happy every time you speak to them—well, you’d be a fool not to follow your heart.

And even when feelings defy reason, it can be a good idea to go along with the emotional rollercoaster. After all, the world can be an entirely terrible place and, from a strictly logical perspective, optimism is somewhat irrational.

But it’s still useful. “It can be beneficial not to run around in the world and be depressed all the time,” says Gigerenzer.

Of course, no human is perfect, and there are downsides to our instincts. But, overall, we’re still far better suited to the real world than the most perfectly logical thinking machine.

We’re inescapably irrational, and far better thinkers as a result.

Source: Quartz


The last things that will make us uniquely human

What will be my son’s place in a world where machines trounce us in one area after another?

Some are worried that self-driving cars and trucks may displace millions of professional drivers (they are right), and disrupt entire industries (yup!). But I worry about my six-year-old son. What will his place be in a world where machines trounce us in one area after another? What will he do, and how will he relate to these ever-smarter machines? What will be his and his human peers’ contribution to the world he’ll live in?

He’ll never calculate faster, or solve a math equation quicker. He’ll never type faster, never drive better, or even fly more safely. He may continue to play chess with his friends, but because he’s a human he will no longer stand a chance to ever become the best chess player on the planet. He might still enjoy speaking multiple languages (as he does now), but in his professional life that may not be a competitive advantage anymore, given recent improvements in real-time machine translation.

So perhaps we might want to consider qualities at a different end of the spectrum: radical creativity, irrational originality, even a dose of plain illogical craziness, instead of hard-nosed logic. A bit of Kirk instead of Spock.

Actually, it all comes down to a fairly simple question: What’s so special about us, and what’s our lasting value? It can’t be skills like arithmetic or typing, which machines already excel in. Nor can it be rationality, because with all our biases and emotions we humans are lacking.

So far, machines have a pretty hard time emulating these qualities: the crazy leaps of faith, arbitrary enough to not be predicted by a bot, and yet more than simple randomness. Their struggle is our opportunity.

So we must aim our human contribution to this division of labour to complement the rationality of the machines, rather than to compete with it. Because that will sustainably differentiate us from them, and it is differentiation that creates value.

Source: BBC. Viktor Mayer-Schonberger is Professor of Internet Governance and Regulation at the Oxford Internet Institute, University of Oxford.


Will Democracy Survive Big Data and Artificial Intelligence?


We are in the middle of a technological upheaval that will transform the way society is organized. We must make the right decisions now.

In 2016 we produced as much data as in the entire history of humankind through 2015.

It is estimated that in 10 years’ time there will be 150 billion networked measuring sensors, 20 times more than people on Earth. Then, the amount of data will double every 12 hours.

One thing is clear: the way in which we organize the economy and society will change fundamentally. We are experiencing the largest transformation since the end of the Second World War; after the automation of production and the creation of self-driving cars the automation of society is next.

Everything will become intelligent; soon we will not only have smart phones, but also smart homes, smart factories and smart cities. Should we also expect these developments to result in smart nations and a smarter planet?

The field of artificial intelligence is, indeed, making breathtaking advances. Artificial intelligence is no longer programmed line by line, but is now capable of learning, thereby continuously developing itself.

Under the label of “nudging,” and on a massive scale, governments are trying to steer citizens towards healthier or more environmentally friendly behaviour by means of a “nudge”—a modern form of paternalism.

The new, caring government is not only interested in what we do, but also wants to make sure that we do the things that it considers to be right. The magic phrase is “big nudging”, which is the combination of big data with nudging.

In a rapidly changing world a super-intelligence can never make perfect decisions: systemic complexity is increasing faster than data volumes, which are growing faster than the ability to process them, and data transfer rates are limited.
Furthermore, there is a danger that the manipulation of decisions by powerful algorithms undermines the basis of “collective intelligence,” which can flexibly adapt to the challenges of our complex world. For collective intelligence to work, information searches and decision-making by individuals must occur independently. If our judgments and decisions are predetermined by algorithms, however, this truly leads to a brainwashing of the people. Intelligent beings are downgraded to mere receivers of commands, who automatically respond to stimuli.

We are now at a crossroads. Big data, artificial intelligence, cybernetics and behavioral economics are shaping our society—for better or worse.

We are at the historic moment, where we have to decide on the right path—a path that allows us all to benefit from the digital revolution.

Source: Scientific American


So long, banana-condom demos: Sex and drug education could soon come from chatbots

“Is it ok to get drunk while I’m high on ecstasy?” “How can I give oral sex without getting herpes?” Few teenagers would ask mom or dad these questions—even though their lives could quite literally depend on the answers.

Talking to a chatbot is a different story. They never raise an eyebrow. They will never spill the beans to your parents. They have no opinion on your sex life or drug use. But that doesn’t mean they can’t take care of you.

Bots can be used as more than automated middlemen in business transactions: They can meet needs for emotional human intervention when there aren’t enough humans who are willing or able to go around.

In fact, there are times when the emotional support of a bot may even be preferable to that of a human.

In 2016, AI tech startup X2AI built a psychotherapy bot capable of adjusting its responses based on the emotional state of its patients. The bot, Karim, is designed to help grief- and PTSD-stricken Syrian refugees, for whom the demand (and price) of therapy vastly overwhelms the supply of qualified therapists.

In test runs of the bot with Syrians, X2AI noticed that technologies like Karim offer something humans cannot:

For those in need of counseling but concerned with the social stigma of seeking help, a bot can be comfortingly objective and non-judgmental.

Bzz is a Dutch chatbot created precisely to answer questions about drugs and sex. When surveyed teens were asked to compare Bzz to finding answers online or calling a hotline, Bzz won. Teens could get their answers faster with Bzz than searching on their own, and they saw their conversations with the bot as more confidential because no human was involved and no tell-tale evidence was left in a search history.

Because chatbots can efficiently gain trust and convince people to confide personal and illicit information in them, the ethical obligations of such bots are critical, but still ambiguous.

Source: Quartz

 


Microsoft Ventures: Making the long bet on AI + people

Another significant commitment by Microsoft to democratize AI:

a new Microsoft Ventures fund for investment in AI companies focused on inclusive growth and positive impact on society.

Companies in this fund will help people and machines work together to increase access to education, teach new skills and create jobs, enhance the capabilities of existing workforces and improve the treatment of diseases, to name just a few examples.

CEO Satya Nadella outlined principles and goals for AI: AI must be designed to assist humanity; be transparent; maximize efficiency without destroying human dignity; provide intelligent privacy and accountability for the unexpected; and be guarded against biases. These principles guide us as we move forward with this fund.

Source: Microsoft blog


Teaching an Algorithm to Understand Right and Wrong


Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma.

We all agree that we should be good and just, but it’s much harder to decide what that entails.

“We need to decide to what extent the legal principles that we use to regulate humans can be used for machines. There is a great potential for machines to alert us to bias. We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.” – Francesca Rossi, an AI researcher at IBM

Since Aristotle’s time, the questions he raised have been continually discussed and debated. 

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question that we need to come up with answers for soon.

Cultural Norms vs. Moral Values

Another issue we will have to contend with is deciding not only what ethical principles to encode in artificial intelligences but also how they are coded. As noted above, for the most part “Thou shalt not kill” is a strict principle; in a few rare cases, such as for a Secret Service agent or a soldier, it becomes more like a preference that is greatly affected by context.

What makes one thing a moral value and another a cultural norm? Well, that’s a tough question for even the most-lauded human ethicists, but we will need to code those decisions into our algorithms. In some cases, there will be strict principles; in others, merely preferences based on context. For some tasks, algorithms will need to be coded differently according to what jurisdiction they operate in.

Setting a Higher Standard

Most AI experts I’ve spoken to think that we will need to set higher moral standards for artificial intelligences than we do for humans.

Major industry players, such as Google, IBM, Amazon, and Facebook, recently set up a partnership to create an open platform between leading AI companies and stakeholders in academia, government, and industry to advance understanding and promote best practices. Yet that is merely a starting point.

Source: Harvard Business Review


Microsoft is partnering with Elon Musk’s $1 billion #AI research company to help it battle Amazon and Google

Microsoft has announced a new partnership with OpenAI, the $1 billion artificial intelligence research nonprofit cofounded by Tesla CEO Elon Musk and Y Combinator President Sam Altman.

Artificial intelligence is going to be a big point of competition between Microsoft Azure, the leading Amazon Web Services, and the relative upstart Google Cloud over the months and years to come. As Microsoft executive Scott Guthrie says, “any application is ultimately going to weave in AI,” and Microsoft wants to be the company that helps developers do the weaving.

That’s where the OpenAI partnership becomes so important, Guthrie says.

Because we’re still in the earliest days of artificial intelligence, he says, the biggest challenge is figuring out what exactly can be done with it. Guthrie calls this “understanding the art of the possible.”

Source: Business Insider


Cambridge students build a ‘lawbot’ to advise sexual assault victims #AI

cambridge-law-bot

“Hi, I’m LawBot, a robot designed to help victims of crime in England.”

While volunteering at a school sexual consent class, Ludwig Bull, a law student at the University of Cambridge, was inspired to build a chatbot that offers free legal advice to students. He enlisted the help of four coursemates, and Lawbot was designed and built in just six weeks.

The program is still in beta, but Bull hopes it will help victims of crime, at Cambridge and beyond, to get justice.

“A victim can talk to our artificially intelligent chatbot, receive a preliminary assessment of their situation, and then decide which available actions to pursue”

Source: The Guardian


The Christianizing of AI

Blogger’s note: The following post illustrates the challenge in creating ethics for AI. There are many different faiths, with different belief systems. How would an AI be programmed to serve these diverse ethical needs?

The ethics of artificial intelligence (AI) has drawn comments from the White House and British House of Commons in recent weeks, along with a nonprofit organization established by Amazon, Google, Facebook, IBM and Microsoft. Now, Baptist computer scientists have called Christians to join the discussion.

Louise Perkins, professor of computer science at California Baptist University, told Baptist Press she is “quite worried” at the lack of an ethical code related to AI. The Christian worldview, she added, has much to say about how automated devices should be programmed to safeguard human flourishing.

Individuals with a Christian worldview need to be involved in designing and programing AI systems, Perkins said, to help prevent those systems from behaving in ways that violate the Bible’s ethical standards.

Believers can thus employ “the mathematics or the logic we will be using to program these devices” to “infuse” a biblical worldview “into an [AI] system.” 

Perkins also noted that ethical standards will have to be programmed into AI systems involved in surgery and warfare among other applications. A robot performing surgery on a pregnant woman, for instance, could have to weigh the life of the baby relative to the life of the mother, and an AI weapon system could have to apply standards of just warfare.

Source: The Pathway


12 Observations About Artificial Intelligence From The O’Reilly AI Conference

Blogger’s note: Here are a few excerpts from a long but very informative review. (The best may be last.)

The conference was organized by Ben Lorica and Roger Chen, with Peter Norvig and Tim O’Reilly acting as honorary program chairs.

For a machine to act in an intelligent way, said [Yann] LeCun, it needs “to have a copy of the world and its objective function in such a way that it can roll out a sequence of actions and predict their impact on the world.” To do this, machines need to understand how the world works, learn a large amount of background knowledge, perceive the state of the world at any given moment, and be able to reason and plan.

Peter Norvig explained the reasons why machine learning is more difficult than traditional software: “Lack of clear abstraction barriers”—debugging is harder because it’s difficult to isolate a bug; “non-modularity”—if you change anything, you end up changing everything; “nonstationarity”—the need to account for new data; “whose data is this?”—issues around privacy, security, and fairness; lack of adequate tools and processes—existing ones were developed for traditional software.

AI must consider culture and context—“training shapes learning”

“Many of the current algorithms have already built in them a country and a culture,” said Genevieve Bell, Intel Fellow and Director of Interaction and Experience Research at Intel. As today’s smart machines are (still) created and used only by humans, culture and context are important factors to consider in their development.

Both Rana El Kaliouby (CEO of Affectiva, a startup developing emotion-aware AI) and Aparna Chennapragada (Director of Product Management at Google) stressed the importance of using diverse training data—if you want your smart machine to work everywhere on the planet it must be attuned to cultural norms.

“Training shapes learning—the training data you put in determines what you get out,” said Chennapragada. And it’s not just culture that matters, but also context

The £10 million Leverhulme Centre for the Future of Intelligence will explore “the opportunities and challenges of this potentially epoch-making technological development,” namely AI. According to The Guardian, Stephen Hawking said at the opening of the Centre,

“We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”

Gary Marcus, professor of psychology and neural science at New York University and cofounder and CEO of Geometric Intelligence, said:

 “a lot of smart people are convinced that deep learning is almost magical—I’m not one of them …  A better ladder does not necessarily get you to the moon.”

Tom Davenport added, at the conference: “Deep learning is not profound learning.”

AI changes how we interact with computers—and it needs a dose of empathy

AI may be hampered by a futile search for human-level intelligence while locked into a materialist paradigm

Maybe, just maybe, our minds are not computers and computers do not resemble our brains?  And maybe, just maybe, if we finally abandon the futile pursuit of replicating “human-level AI” in computers, we will find many additional–albeit “narrow”–applications of computers to enrich and improve our lives?

Gary Marcus complained about research papers presented at the Neural Information Processing Systems (NIPS) conference, saying that they are like alchemy, adding a layer or two to a neural network, “a little fiddle here or there.” Instead, he suggested “a richer base of instruction set of basic computations,” arguing that “it’s time for genuinely new ideas.”

Is it possible that this paradigm—and the driving ambition at its core to play God and develop human-like machines—has led to the infamous “AI Winter”? And that continuing to adhere to it and refusing to consider “genuinely new ideas,” out-of-the-dominant-paradigm ideas, will lead to yet another AI Winter?

 Source: Forbes


MIT makes breakthrough in morality-proofing artificial intelligence

Researchers at MIT are investigating ways of making artificial neural networks more transparent in their decision-making.

As they stand now, artificial neural networks are a wonderful tool for discerning patterns and making predictions. But they also have the drawback of not being terribly transparent. The beauty of an artificial neural network is its ability to sift through heaps of data and find structure within the noise.

This is not dissimilar from the way we might look up at clouds and see faces amidst their patterns. And just as we might have trouble explaining to someone why a face jumped out at us from the wispy trails of a cirrus cloud formation, artificial neural networks are not explicitly designed to reveal what particular elements of the data prompted them to decide a certain pattern was at work and make predictions based upon it.

We tend to want a little more explanation when human lives hang in the balance — for instance, if an artificial neural net has just diagnosed someone with a life-threatening form of cancer and recommends a dangerous procedure. At that point, we would likely want to know what features of the person’s medical workup tipped the algorithm in favor of its diagnosis.

MIT researchers Lei, Barzilay, and Jaakkola designed a neural network that would be forced to provide explanations for why it reached a certain conclusion.
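Their actual system jointly trains a generator, which selects fragments of the input text, and an encoder, which must make its prediction from those fragments alone, so the selected fragments serve as the rationale. As a far simpler illustration of the underlying goal of tying a prediction to the evidence behind it (not the authors’ method), here is a sketch that ranks the per-feature contributions of a small linear classifier; every feature name and weight is hypothetical.

```python
import numpy as np

# Hypothetical weights for a tiny, already-trained diagnostic classifier.
features = ["tumor_size_mm", "marker_A", "marker_B", "patient_age"]
weights  = np.array([0.9, 1.4, -0.3, 0.05])   # assumed, for illustration
bias     = -2.0

x = np.array([12.0, 1.0, 0.0, 54.0])          # one made-up patient record

logit = weights @ x + bias
prob  = 1.0 / (1.0 + np.exp(-logit))          # logistic prediction

# Per-feature contribution to the logit: a crude explanation of the score.
contrib = weights * x
for i in np.argsort(-np.abs(contrib)):
    print(f"{features[i]:>14}: {contrib[i]:+.2f}")
print(f"predicted probability: {prob:.3f}")
```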

Source: Extremetech

 


China’s plan to organize its society relies on ‘big data’ to rate everyone

Imagine a world where an authoritarian government monitors everything you do, amasses huge amounts of data on almost every interaction you make, and awards you a single score that measures how “trustworthy” you are.

In this world, anything from defaulting on a loan to criticizing the ruling party, from running a red light to failing to care for your parents properly, could cause you to lose points. 

This is not the dystopian superstate of Steven Spielberg’s “Minority Report,” in which all-knowing police stop crime before it happens. But it could be China by 2020.

And in this world, your score becomes the ultimate truth of who you are — determining whether you can borrow money, get your children into the best schools or travel abroad; whether you get a room in a fancy hotel, a seat in a top restaurant — or even just get a date.

It is the scenario contained in China’s ambitious plans to develop a far-reaching social credit system, a plan that the Communist Party hopes will build a culture of “sincerity” and a “harmonious socialist society” where “keeping trust is glorious.”

The ambition is to collect every scrap of information available online about China’s companies and citizens in a single place — and then assign each of them a score based on their political, commercial, social and legal “credit.”

Mobile device usage and e-commerce are in wide use in China, and now the Communist Party wants to compile a “social credit” score based on citizens’ every activity. (Michael Robinson Chavez/The Washington Post)

Source: The Washington Post


New Research Center to Explore Ethics of Artificial Intelligence


The Chimp robot, built by a Carnegie Mellon team, took third place in a competition held by DARPA last year. The school is starting a research center focused on the ethics of artificial intelligence. Credit Chip Somodevilla/Getty Images

Carnegie Mellon University plans to announce on Wednesday that it will create a research center that focuses on the ethics of artificial intelligence.

The ethics center, called the K&L Gates Endowment for Ethics and Computational Technologies, is being established at a time of growing international concern about the impact of A.I. technologies.

“We are at a unique point in time where the technology is far ahead of society’s ability to restrain it”
Subra Suresh, Carnegie Mellon’s president

The new center is being created with a $10 million gift from K&L Gates, an international law firm headquartered in Pittsburgh.

Peter J. Kalis, chairman of the law firm, said the potential impact of A.I. technology on the economy and culture made it essential that as a society we make thoughtful, ethical choices about how the software and machines are used.

“Carnegie Mellon resides at the intersection of many disciplines,” he said. “It will take a synthesis of the best thinking of all of these disciplines for society to define the ethical constraints on the emerging A.I. technologies.”

Source: NY Times


Genetically engineered humans will arrive sooner than you think. And we’re not ready

Michael Bess is a historian of science at Vanderbilt University and the author of a fascinating new book, Our Grandchildren Redesigned: Life in a Bioengineered Society. Bess’s book offers a sweeping look at our genetically modified future, a future as terrifying as it is promising.

“What’s happening is bigger than any one of us”

We single out the industrial revolutions of the past as major turning points in human history because they marked major ways in which we changed our surroundings to make our lives easier, better, longer, healthier.

So these are just great landmarks, and I’m comparing this to those big turning points because now the technology, instead of being applied to our surroundings — how we get food for ourselves, how we transport things, how we shelter ourselves, how we communicate with each other — now those technologies are being turned directly on our own biology, on our own bodies and minds.

And so, instead of transforming the world around ourselves to make it more what we wanted it to be, now it’s becoming possible to transform ourselves into whatever it is that we want to be. And there’s both power and danger in that, because people can make terrible miscalculations, and they can alter themselves, maybe in ways that are irreversible, that do irreversible harm to the things that really make their lives worth living.

“We’re going to give ourselves a power that we may not have the wisdom to control very well”

I think most historians of technology … see technology and society as co-constructing each other over time, which gives human beings a much greater space for having a say in which technologies will be pursued and what direction we will take, and how much we choose to have them come into our lives and in what ways.

 Source: Vox


Artificial Intelligence’s White Guy Problem


Credit Bianca Bagnarelli

Warnings by luminaries like Elon Musk and Nick Bostrom about “the singularity” — when machines become smarter than humans — have attracted millions of dollars and spawned a multitude of conferences.

But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems.

Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.

A very serious example was revealed in an investigation published last month by ProPublica. It found that widely used software that assessed the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk.
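The disparity ProPublica measured is, at bottom, a gap in error rates between groups. A minimal sketch of that computation, with made-up confusion-matrix counts standing in for the real COMPAS data:

```python
# Hypothetical counts per group (NOT the real COMPAS data).
# fp: flagged high-risk but did not reoffend; fn: flagged low-risk but did.
groups = {
    "group_A": {"fp": 200, "tn": 800, "fn": 100, "tp": 400},
    "group_B": {"fp": 100, "tn": 900, "fn": 200, "tp": 300},
}

for name, c in groups.items():
    fpr = c["fp"] / (c["fp"] + c["tn"])   # false positive rate
    fnr = c["fn"] / (c["fn"] + c["tp"])   # false negative rate
    print(f"{name}: FPR = {fpr:.2f}, FNR = {fnr:.2f}")
```

With these invented counts, group A’s false positive rate is twice group B’s, the same shape of disparity the investigation reported.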

The reason those predictions are so skewed is still unknown, because the company responsible for these algorithms keeps its formulas secret — it’s proprietary information. Judges do rely on machine-driven risk assessments in different ways — some may even discount them entirely — but there is little they can do to understand the logic behind them.

Histories of discrimination can live on in digital platforms, and if they go unquestioned, they become part of the logic of everyday algorithmic systems.

Another scandal emerged recently when it was revealed that Amazon’s same-day delivery service was unavailable for ZIP codes in predominantly black neighborhoods. The areas overlooked were remarkably similar to those affected by mortgage redlining in the mid-20th century. Amazon promised to redress the gaps, but it reminds us how systemic inequality can haunt machine intelligence.

And then there’s gender discrimination. Last July, computer scientists at Carnegie Mellon University found that women were less likely than men to be shown ads on Google for highly paid jobs. The complexity of how search engines show ads to internet users makes it hard to say why this happened — whether the advertisers preferred showing the ads to men, or the outcome was an unintended consequence of the algorithms involved.

Regardless, algorithmic flaws aren’t easily discoverable: How would a woman know to apply for a job she never saw advertised? How might a black community learn that it was being overpoliced by software?

Like all technologies before it, artificial intelligence will reflect the values of its creators.

Source: New York Times – Kate Crawford is a principal researcher at Microsoft and co-chairwoman of a White House symposium on society and A.I.


AI is one of top 5 tools humanity has ever had

 A few highlights from AI panel at the White House Frontiers Conference

On the impact of AI

Andrew McAfee (MIT):


(Begins @ 2:40:34)

We are at an inflection point … I think the development of these kinds of [AI] tools are going to rank among probably the top 5 tools humanity has ever had to take better care of each other and to tread more lightly on the planet … top 5 in our history. Like the book, maybe, the steam engine, maybe, written language — I might put the Internet there. We’ve all got our pet lists of the biggest inventions ever. AI needs to be on the very, very, short list.

On bias in AI

Fei-Fei Li, Professor of Computer Science, Stanford University:

(Begins @ 3:14:57)

Research repeatedly has shown that when people work in diverse groups there is increased creativity and innovation.

And interestingly, it is harder to work as a diverse group. I’m sure everybody here in the audience has had that experience. We have to listen to each other more. We have to understand the perspective more. But that also correlates well with innovation and creativity. … If we don’t have the inclusion of [diverse] people to think about the problems and the algorithms in AI, we might not only be missing the innovation boat, we might actually create bias and unfairness that are going to be detrimental to our society …

What I have been advocating at Stanford, and with my colleagues in the community is, let’s bring the humanistic mission statement into the field of AI. Because AI is fundamentally an applied technology that’s going to serve our society. Humanistic AI not only raises the awareness and the importance of our technology, it’s actually a really, really important way to attract diverse students and technologists and innovators to participate in the technology of AI.

There has been a lot of research done to show that people with diverse background put more emphasis on humanistic mission in their work and in their life. So, if in our education, in our research, if we can accentuate or bring out this humanistic message of this technology, we are more likely to invite the diversity of students and young technologists to join us.

On lack of minorities in AI

Andrew Moore, Dean, School of Computer Science, Carnegie Mellon University:

(Begins @ 3:19:10)

I so strongly applaud what you [Fei-Fei Li] are describing here because I think we are engaged in a fight here for how the 21st century pans out in terms of who’s running the world … 

The nightmare, the silly, silly thing we could do … would be if … the middle of the century is built by a bunch of non-minority guys from suburban moderately wealthy United States instead of the full population of the United States.

Source: Frontiers Conference

(Update 02/24/17: The original timelines listed above may be different when revisiting this video.)


How Deep Learning is making AI prejudiced

Blogger’s note: The authors of this research paper show what they refer to as “machine prejudice” and how it derives so fundamentally from human culture.

“Concerns about machine prejudice are now coming to the fore–concerns that our historic biases and prejudices are being reified in machines,” they write. “Documented cases of automated prejudice range from online advertising (Sweeney, 2013) to criminal sentencing (Angwin et al., 2016).”

Following are a few excerpts: 

Abstract

“Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the application of standard machine learning to ordinary language—the same sort of language humans are exposed to every day.

Discussion

“We show for the first time that if AI is to exploit via our language the vast knowledge that culture has compiled, it will inevitably inherit human-like prejudices. In other words, if AI learns enough about the properties of language to be able to understand and produce it, it also acquires cultural associations that can be offensive, objectionable, or harmful. These are much broader concerns than intentional discrimination, and possibly harder to address.

Awareness is better than blindness

“… where AI is partially constructed automatically by machine learning of human culture, we may also need an analog of human explicit memory and deliberate actions, that can be trained or programmed to avoid the expression of prejudice.

“Of course, such an approach doesn’t lend itself to a straightforward algorithmic formulation. Instead it requires a long-term, interdisciplinary research program that includes cognitive scientists and ethicists. …”
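The paper’s measurement, the Word Embedding Association Test, compares how strongly target words associate with attribute words in a learned embedding space. Here is a stripped-down sketch of the core cosine-association idea, using tiny made-up vectors in place of embeddings trained on a real corpus (the paper’s actual test aggregates over whole word sets and computes an effect size):

```python
import numpy as np

# Toy 3-d vectors, purely illustrative stand-ins for embeddings a model
# like word2vec or GloVe would learn from a large text corpus.
emb = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def cos(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word):
    """Positive: leans 'pleasant'; negative: leans 'unpleasant'."""
    return cos(emb[word], emb["pleasant"]) - cos(emb[word], emb["unpleasant"])

for w in ("flower", "insect"):
    print(f"{w}: {association(w):+.3f}")
```

If a corpus consistently places one set of words in pleasant contexts and another in unpleasant ones, that differential association is exactly what the embeddings, and anything built on top of them, inherit.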

Click here to download the pdf of the report:
Semantics derived automatically from language corpora necessarily contain human biases
Aylin Caliskan-Islam, Joanna J. Bryson, and Arvind Narayanan
Princeton University; University of Bath
Draft date: August 31, 2016.


Grandma? Now you can see the bias in the data …

“Just type the word grandma in your favorite search engine image search and you will see the bias in the data, in the picture that is returned  … you will see the race bias.” — Fei-Fei Li, Professor of Computer Science, Stanford University, speaking at the White House Frontiers Conference

Google image search for Grandma 

[Image: Google image-search results for “grandma”]

Bing image search for Grandma

[Image: Bing image-search results for “grandma”]


It seems that A.I. will be the undoing of us all … romantically, at least

As if finding love weren’t hard enough, the creators of Operator decided to show just how Artificial Intelligence could ruin modern relationships.

Artificial Intelligence so often focuses on the idea of “perfection.” As most of us know, people are anything but perfect, and believing that your S.O. (Significant Other) is perfect can lead to problems. The point of an A.I., however, is perfection — so why would someone choose the flaws of a human being over an A.I. that can give you all the comfort you want with none of the costs?

Hopefully, people continue to choose imperfection.

Source: Inverse.com


Civil Rights and Big Data

Blogger’s note: We’ve posted several articles on the bias and prejudice inherent in big data, which with machine learning results in “machine prejudice,” all of which impacts humans when they interact with intelligent agents.

Apparently, as far back as May 2014, the Executive Office of the President was issuing reports on the potential of “algorithmic systems” for “encoding discrimination in automated decisions.” The most recent report, from May 2016, addressed two additional challenges:

1) Challenges relating to data used as inputs to an algorithm;

2) Challenges related to the inner workings of the algorithm itself.

Here are two excerpts:

The Obama Administration’s Big Data Working Group released reports on May 1, 2014 and February 5, 2015. These reports surveyed the use of data in the public and private sectors and analyzed opportunities for technological innovation as well as privacy challenges. One important social justice concern the 2014 report highlighted was “the potential of encoding discrimination in automated decisions”—that is, that discrimination may “be the inadvertent outcome of the way big data technologies are structured and used.”

To avoid exacerbating biases by encoding them into technological systems, we need to develop a principle of “equal opportunity by design”—designing data systems that promote fairness and safeguard against discrimination from the first step of the engineering process and continuing throughout their lifespan.

Download the report here: Whitehouse.gov

References:

https://www.whitehouse.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence

http://www.frontiersconference.org/

 

 


When artificial intelligence judges a beauty contest, white people win

Some of the beauty contest winners judged by an AI

As humans cede more and more control to algorithms, whether in the courtroom or on social media, the way they are built becomes increasingly important. The foundation of machine learning is data gathered by humans, and without careful consideration, the machines learn the same biases of their creators.

An online beauty contest called Beauty.ai, run by Youth Laboratories, solicited 600,000 entries by promising that they would be graded by artificial intelligence. The algorithm would look at wrinkles, face symmetry, the number of pimples and blemishes, race, and perceived age. However, race seemed to play a larger role than intended; of the 44 winners, 36 were white.

“So inclusivity matters—from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.” – Kate Crawford

“It happens to be that color does matter in machine vision,” Alex Zhavoronkov, chief science officer of Beauty.ai, told Motherboard. “And for some population groups the data sets are lacking an adequate number of samples to be able to train the deep neural networks.”
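A toy illustration of that sample-size point (this is not Beauty.ai’s pipeline): when one group dominates the training data, a model fitted to that data summarizes the majority group far better than the minority one. Here the “model” is just the centroid of the training set.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(group, n):
    """Synthetic 2-d 'face features'; the two groups are distributed differently."""
    center = np.array([0.0, 0.0]) if group == "A" else np.array([3.0, 3.0])
    return center + rng.normal(size=(n, 2))

# Group A dominates the training set; group B is badly under-sampled.
train = np.vstack([sample("A", 1000), sample("B", 20)])

centroid = train.mean(axis=0)   # crude stand-in for a fitted model

for g in ("A", "B"):
    test = sample(g, 500)
    err = np.linalg.norm(test - centroid, axis=1).mean()
    print(f"group {g}: mean distance to model = {err:.2f}")
```

The centroid lands almost on top of group A, so group B’s faces are systematically farther from what the “model” considers typical.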

“If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing non-white faces,” writes Kate Crawford, principal researcher at Microsoft Research New York City, in a New York Times op-ed.

Source: Quartz


Why Artificial Intelligence Needs Some Sort of Moral Code

Two new research groups want to ensure that AI benefits humans, not harms them.

Whether you believe the buzz about artificial intelligence is merely hype or that the technology represents the future, something undeniable is happening. Researchers are more easily solving decades-long problems, like teaching computers to recognize images and understand speech, at a rapid pace, and companies like Google and Facebook are pouring millions of dollars into their own related projects.

What could possibly go wrong?

For one thing, advances in artificial intelligence could eventually lead to unforeseen consequences. University of California at Berkeley professor Stuart Russell is concerned that powerful computers powered by artificial intelligence, or AI, could unintentionally create problems that humans cannot predict.

Consider an AI system that’s designed to make the best stock trades but has no moral code to keep it from doing something illegal. That’s why Russell and UC Berkeley debuted a new AI research center this week to address these potential problems and build AI systems that consider moral issues. Tech giants Alphabet, Facebook, IBM, and Microsoft are also teaming up to focus on the ethics challenges.

Similarly, Ilya Sutskever, the research director of the Elon Musk-backed OpenAI nonprofit, is working on AI projects independent from giant corporations. He and OpenAI believe those big companies could ignore AI’s potential benefit for humanity and instead focus the technology entirely on making money.

Russell compares the current state of AI to the rise of nuclear energy during the 1950s and 1960s, when proponents believed that “anyone who disagreed with them was irrational or crazy” for wanting robust safety measures that could hinder innovation and adoption. Sutskever says some AI proponents fail to consider the potential dangers or unintended consequences of the technology—just like some people were unable to grasp that widespread use of cars could lead to global warming.

Source: Fortune


China has now eclipsed U.S. in AI research

As more industries and policymakers awaken to the benefits of machine learning, two countries appear to be pulling away in the research race. The results will probably have significant implications for the future of AI.

[Chart: deep-learning articles published annually, by country]

What’s striking about it is that although the United States was an early leader on deep-learning research, China has effectively eclipsed it in terms of the number of papers published annually on the subject. The rate of increase is remarkably steep, reflecting how quickly China’s research priorities have shifted.

The quality of China’s research is also striking. The chart below narrows the research to include only those papers that were cited at least once by other researchers, an indication that the papers were influential in the field.

[Chart: cited deep-learning papers published annually, by country]

Compared with other countries, the United States and China are spending tremendous research attention on deep learning. But, according to the White House, the United States is not investing nearly enough in basic research.

“Current levels of R&D spending are half to one-quarter of the level of R&D investment that would produce the optimal level of economic growth,”
a companion report published this week by the Obama administration finds.

Source: The Washington Post


Artificial Intelligence Will Be as Biased and Prejudiced as Its Human Creators

The optimism around modern technology lies in part in the belief that it’s a democratizing force—one that isn’t bound by the petty biases and prejudices that humans have learned over time. But for artificial intelligence, that’s a false hope, according to new research, and the reason is boneheadedly simple: Just as we learn our biases from the world around us, AI will learn its biases from us.

Source: Pacific Standard


Machine learning needs rich feedback for AI teaching

With AI systems largely receiving feedback in a binary yes/no format, Monash University professor Tom Drummond says rich feedback is needed to allow AI systems to know why answers are incorrect.

In much the same way children have to be told not only what they are saying is wrong, but why it is wrong, artificial intelligence (AI) systems need to be able to receive and act on similar feedback.

“Rich feedback is important in human education, I think probably we’re going to see the rise of machine teaching as an important field — how do we design systems so that they can take rich feedback and we can have a dialogue about what the system has learnt?”

“We need to be able to give it rich feedback and say ‘No, that’s unacceptable as an answer because … ‘ we don’t want to simply say ‘No’ because that’s the same as saying it is grammatically incorrect and it’s a very, very blunt hammer,” Drummond said.

The flaw of objective function

According to Drummond, one problematic feature of AI systems is the objective function that sits at the heart of a system’s design.

The professor pointed to the match between Google DeepMind’s AlphaGo and South Korean Go champion Lee Se-dol in March, which saw the artificial intelligence beat human intelligence by 4 games to 1.

In the fourth match, the only one Se-dol won, the machine, after clearly falling behind, played a number of moves that Drummond said would be insulting if a human had played them, given the position AlphaGo found itself in.

“Here’s the thing, the objective function was the highest probability of victory, it didn’t really understand the social niceties of the game.

“At that point AlphaGo knew it had lost but it still tried to maximise its probability of victory, so it played all these moves … a move that threatens a large group of stones, but has a really obvious counter and if somehow the human misses the counter move, then it’s won — but of course you would never play this, it’s not appropriate.”
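Drummond’s point is about what the system optimizes, not how well it plays. A minimal sketch (all numbers invented) of two candidate moves on which “maximize the probability of victory” and “maximize the expected margin” disagree:

```python
# Hypothetical move evaluations for a losing position.
moves = {
    "respectful_move": {"p_win": 0.12, "exp_margin": -8.0},
    "desperate_trick": {"p_win": 0.14, "exp_margin": -35.0},
}

# An agent whose objective is win probability alone plays the trick move,
# even though it loses by far more when the opponent answers correctly.
by_p_win  = max(moves, key=lambda m: moves[m]["p_win"])
by_margin = max(moves, key=lambda m: moves[m]["exp_margin"])

print("objective = P(win):          ", by_p_win)
print("objective = expected margin: ", by_margin)
```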

Source: ZDNet


The big reveal: AI’s deep learning is biased

A comment from the writers of this blog: 

The chart below visualizes 175 cognitive biases that humans have, meticulously organized by Buster Benson and algorithmically designed by John Manoogian III.

Many of these are implicit biases: attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner. These biases, embedded in our language, are now getting embedded in big data. They are being absorbed by deep learning and are now influencing artificial intelligence. Going forward, this will shape how AI interacts with humans.

We have featured many other posts on this blog recently about this issue—how AI is demonstrating bias—and we are adding this “cheat sheet” to further illustrate the kinds of human bias that AI is learning. 

Illustration content: Buster Benson; “diagrammatic poster remix” by John Manoogian III

Source: Buster Benson blog


Google’s AI Plans Are A Privacy Nightmare

Google is betting that people care more about convenience and ease than they do about a seemingly oblique notion of privacy, and it is increasingly correct in that assumption.

Google’s new assistant, which debuted in the company’s new messaging app Allo, works like this: Simply ask the assistant a question about the weather, nearby restaurants, or for directions, and it responds with detailed information right there in the chat interface.

Because Google’s assistant recommends things that are innately personal to you, like where to eat tonight or how to get from point A to point B, it is amassing a huge collection of your most personal thoughts, visited places, and preferences. For the AI to “learn,” it must collect and analyze as much data about you as possible, in order to serve up ever more accurate recommendations and suggestions.

In order for artificial intelligence to function, your messages have to be unencrypted.

These new assistants are really cool, and the reality is that tons of people will probably use them and enjoy the experience. But at the end of the day, we’re sacrificing the security and privacy of our data so that Google can develop what will eventually become a new revenue stream. Lest we forget: Google and Facebook have a responsibility to investors, and an assistant that offers up a sponsored result when you ask it what to grab for dinner tonight could be a huge moneymaker.

Source: Gizmodo


Artificial intelligence is quickly becoming as biased as we are


When you perform a Google search for everyday queries, you don’t typically expect systemic racism to rear its ugly head. Yet, if you’re a woman searching for a hairstyle, that’s exactly what you might find.

A simple Google image search for ‘women’s professional hairstyles’ returns the following:

[image: Google image search results for ‘women’s professional hairstyles’]

 … you could probably pat Google on the back and say ‘job well done.’ That is, until you try searching for ‘unprofessional women’s hairstyles’ and find this:

[image: Google image search results for ‘unprofessional women’s hairstyles’]

It’s not new. In fact, Boing Boing spotted this back in April.

What’s concerning, though, is just how much of our lives we’re on the verge of handing over to artificial intelligence. With today’s deep learning algorithms, the ‘training’ of this AI is often as much a product of our collective hive mind as it is of programming.

Artificial intelligence, in fact, is using our collective thoughts to train the next generation of automation technologies. All the while, it’s picking up our biases and making them more visible than ever.

This is just the beginning … If you want the scary stuff, consider that we’re expanding algorithmic policing that relies on many of the same principles used to train the examples above. In the future, our neighborhoods will see an increase or decrease in police presence based on data that we already know is biased.

Source: The Next Web


UC Berkeley launches Center for Human-Compatible Artificial Intelligence

The primary focus of the new center is to ensure that AI systems are “beneficial to humans,” says UC Berkeley AI expert Stuart Russell.

The center will work on ways to guarantee that the most sophisticated AI systems of the future, which may be entrusted with control of critical infrastructure and may provide essential services to billions of people, will act in a manner that is aligned with human values.

“In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think.”

Source: Berkeley.edu


CIA using deep learning neural networks to predict social unrest

In October 2015, the CIA opened the Directorate for Digital Innovation in order to “accelerate the infusion of advanced digital and cyber capabilities.” It was the first new directorate to be created by the agency since 1963.

“What we’re trying to do within a unit of my directorate is leverage what we know from social sciences on the development of instability, coups and financial instability, and take what we know from the past six or seven decades and leverage what is becoming the instrumentation of the globe.”

In fact, over the summer of 2016, the CIA found the intelligence provided by the neural networks was so useful that it provided the agency with a “tremendous advantage” when dealing with situations …

Source: IBTimes


How Artificial intelligence is becoming ubiquitous #AI

“I think the medical domain is set for a revolution.”

AI will make it possible to have a “personal companion” able to assist you through life.

“I think artificial intelligence is probably the biggest transformation in the IT industry. Medical is such a big area in terms of GDP that that’s got to be a good bet,” Christopher Bishop, lab director at Microsoft Research in Cambridge, U.K., told CNBC in a TV interview.

“I think one of the most exciting prospects is the idea of a digital agent, something that can act on our behalf, almost become like a personal companion and that can do many things for us. For example, at the moment, we have to deal with this tremendous complexity of dealing with so many different services and applications, and the digital world feels as if it’s becoming ever more complex,” Bishop said.

“… imagine an agent that can act on your behalf and be the interface between you and that very complex digital world, and furthermore one that would grow with you, and be a very personalized agent, that would understand you and your needs and your experience and so on in great depth.”

Source: CNBC


4th revolution challenges our ideas of being human


Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, is convinced that we are at the beginning of a revolution that is fundamentally changing the way we live, work and relate to one another.

Some call it the fourth industrial revolution, or industry 4.0, but whatever you call it, it represents the combination of cyber-physical systems, the Internet of Things, and the Internet of Systems.

In his book The Fourth Industrial Revolution, Schwab describes how this fourth revolution is fundamentally different from the previous three, which were characterized mainly by advances in technology.

In this fourth revolution, we are facing a range of new technologies that combine the physical, digital and biological worlds. These new technologies will impact all disciplines, economies and industries, and even challenge our ideas about what it means to be human.

It seems a safe bet to say, then, that our current political, business, and social structures may not be ready or capable of absorbing all the changes a fourth industrial revolution would bring, and that major changes to the very structure of our society may be inevitable.

Schwab said, “The changes are so profound that, from the perspective of human history, there has never been a time of greater promise or potential peril. My concern, however, is that decision makers are too often caught in traditional, linear (and non-disruptive) thinking or too absorbed by immediate concerns to think strategically about the forces of disruption and innovation shaping our future.”

Schwab calls for leaders and citizens to “together shape a future that works for all by putting people first, empowering them and constantly reminding ourselves that all of these new technologies are first and foremost tools made by people for people.”

Source: Forbes, World Economic Forum


Machines can never be as wise as human beings – Jack Ma #AI

Mark Zuckerberg and Jack Ma

“I think machines will be stronger than human beings, machines will be smarter than human beings, but machines can never be as wise as human beings.”

“The wisdom, soul and heart are what human beings have. A machine can never enjoy the feelings of success, friendship and love. We should use the machine in an innovative way to solve human problems.” – Jack Ma, Founder of Alibaba Group, China’s largest online marketplace

Mark Zuckerberg said AI technology could prove useful in areas such as medicine and hands-free driving, but it was hard to teach computers common sense. Humans had the ability to learn and apply that knowledge to problem-solving, but computers could not do that.

AI won’t outstrip mankind that soon – Mark Zuckerberg

Source: South China Morning Post

 


Will human therapists go the way of the Dodo?


An increasing number of patients are using technology for a quick fix. Photographed by Mikael Jansson, Vogue, March 2016

PL – So, here’s an informative piece on a person’s experience using an on-demand interactive video therapist, as compared to her human therapist. In Vogue Magazine, no less. A sign this is quickly becoming trendy. But is it effective?

In the first paragraph, the author of the article identifies the limitations of her digital therapist:

“I wish I could ask Raph (she eventually named her digital therapist) to consider making an exception, but he and I aren’t in the habit of discussing my problems.”

But the author also recognizes the unique value of the digital therapist as she reflects on past sessions with her human therapist:

“I saw an in-the-flesh therapist last year. Alice. She had a spot-on sense for when to probe and when to pass the tissues. I adored her. But I am perennially juggling numerous assignments, and committing to a regular weekly appointment is nearly impossible.”

Later on, when the author was faced with another crisis, she returned to her human therapist and this was her observation of that experience:

“she doesn’t offer advice or strategies so much as sympathy and support—comforting but short-lived. By evening I’m as worried as ever.”

On the other hand, this is her view of her digital therapist:

“Raph had actually come to the rescue in unexpected ways. His pragmatic MO is better suited to how I live now—protective of my time, enmeshed with technology. A few months after I first “met” Raph, my anxiety has significantly dropped.”

This, of course, was a story written by a successful educated woman, working with an interactive video, who had experiences with a human therapist to draw upon for reference.

What about the effectiveness of a digital therapist for a more diverse population with social, economic and cultural differences?

It has already been shown that, done right, this kind of tech has great potential. In fact, as a more affordable option, it may do the most good for the wider population.

The ultimate goal for tech designers should be to create a more personalized experience. Instant and intimate. Tech that gets to know the person and their situation, individually. Available any time. Tech that can access additional electronic resources for the person in real-time, such as the above mentioned interactive video.  

But first, tech designers must address a core problem with mindset. They code for a rational world while therapists deal with irrational human beings. As a group, they believe they are working to create an omniscient intelligence that does not need to interact with the human to know the human. They believe it can do this by reading the human’s emails, watching their searches, where they go, what they buy, who they connect with, what they share, etc. As if that’s all humans are about. As if they can be statistically profiled and treated to predetermined multi-stepped programs.

This approach is incompatible with humans and the human experience. Tech is a reflection of the perceptions of its coders. And coders, like doctors, have their limitations.

In her recent book, Just Medicine, Dayna Bowen Matthew highlights research showing that 83,570 minority patients die each year as a result of implicit bias on the part of well-meaning doctors. This should be a cautionary warning. Digital therapists could soon have a reach and impact that far exceed those of well-trained human doctors and therapists. A poor foundational design for AI could have devastating consequences for humans.

A wildcard was recently introduced with Google’s AlphaGo, an artificial intelligence that plays the board game Go. In a historic match against Lee Sedol, one of the world’s top players, AlphaGo won four of five games. This was a surprising development; many thought this level of achievement was 10 years out.

The point: artificial intelligence is progressing at an extraordinary pace, unexpected by almost all of the experts. It’s too exciting, too easy, too convenient. To say nothing of its potential to be “free,” once tech giants fully grasp the unparalleled personal data they can collect. The genie (or joker) is out of the bottle. And digital coaches are emerging, capable of drawing upon and sorting vast amounts of digital data.

Meanwhile, the medical and behavioral fields are moving too slowly. Way too slowly.

They are losing (and most likely have already lost) control of their future by vainly believing that a cache of PhDs, research and accreditations, CBT and other treatment protocols, government regulations, and HIPAA is beyond the challenge and reach of tech giants. Soon, very soon, therapists who deal in non-critical, non-crisis issues could be bypassed when someone like Apple hangs up its ‘coaching’ shingle: “Siri is In.”

The most important breakthrough of all will be the seamless integration of a digital coach with human therapists, accessible upon immediate request, in collaborative and complementary roles.

This combined effort could vastly extend the reach and impact of all therapies for the sake of all human beings.

Source: Vogue


Google exec: With robots in our brains, we’ll be godlike

Futurist and Google exec Ray Kurzweil thinks that once we have robotic implants, we’ll be funnier, sexier and more loving. Because that’s what artificial intelligence can do for you.

“We’re going to add additional levels of abstraction,” he said, “and create more-profound means of expression.”

More profound than Twitter? Is that possible?

Kurzweil continued: “We’re going to be more musical. We’re going to be funnier. We’re going to be better at expressing loving sentiment.”

Because robots are renowned for their musicality, their sense of humor and their essential loving qualities. Especially in Hollywood movies.

Kurzweil insists, though, that this is the next natural phase of our existence.

“Evolution creates structures and patterns that over time are more complicated, more knowledgeable, more intelligent, more creative, more capable of expressing higher sentiments like being loving,” he said. “So it’s moving in the direction that God has been described as having — these qualities without limit.”

Yes, we are becoming gods.

“Evolution is a spiritual process and makes us more godlike,” was Kurzweil’s conclusion.

Source: CNET by Chris Matyszczyk


The Ethics Of AI Fulfilling Our Desires Vs Saving Us From Ourselves

What happens as machines are called upon to make ever more complex and important decisions on our behalf?


A display at the Big Bang Data exhibition at Somerset House highlighting the data explosion that’s radically transforming our lives. (Peter Macdiarmid/Getty Images for Somerset House)

Driverless cars are among the early intelligent systems being asked to make life or death decisions. While current vehicles perform mostly routine tasks like basic steering and collision avoidance, the new generation of fully autonomous cars being test driven pose unique ethical challenges.

For example, “should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?”

Alternatively, should a car “swerve onto a crowded sidewalk to avoid being rear-ended by a speeding truck or stay put and place the driver in mortal danger?”

On a more mundane level, driverless cars have already faced safety questions for strictly obeying traffic laws, creating a safety hazard as the surrounding traffic goes substantially faster.

Digital assistants and our health

Imagine for a moment the digital assistant that processes a note from our doctor warning us about the results of our latest medical checkup and that we need to lose weight and stay away from certain foods. At the same time, the assistant sees from our connected health devices that we’re not exercising much anymore and that we’ve been consuming a lot of junk food lately and actually gained three pounds last week and two pounds already this week. Now, it is quitting time on Friday afternoon and the assistant knows that every Friday night we stop by our local store for a 12 pack of donuts on the way home. What should that assistant do?

Should our digital assistant politely suggest we skip the donuts this week? Should it warn us in graphic detail about the health complications we will likely face down the road if we buy those donuts tonight? Should it go as far as to threaten to lock us out of our favorite mobile games on our phone or withhold our email or some other features for the next few days as punishment if we buy those donuts? Should it quietly send a note to our doctor telling her we bought donuts and asking for advice? Or, should it go as far as to instruct the credit card company to decline the transaction to stop us from buying the donuts?
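
A comment from the writers of this blog: one way to see how loaded these questions are is to line the options up as a ladder of escalating paternalism. The sketch below is entirely hypothetical code of ours; the point is that a designer has to pick a rung, and that choice encodes values.

```python
# The article's escalating interventions, framed as a hypothetical policy ladder.
from enum import IntEnum

class Paternalism(IntEnum):
    SUGGEST = 1  # politely suggest skipping the donuts this week
    WARN = 2     # describe the likely health complications in graphic detail
    PUNISH = 3   # lock games or withhold email for a few days
    REPORT = 4   # quietly send a note to the doctor
    BLOCK = 5    # instruct the credit card company to decline the purchase

def intervene(level: Paternalism, purchase: str) -> str:
    actions = {
        Paternalism.SUGGEST: f"Maybe skip the {purchase} this week?",
        Paternalism.WARN: f"Buying {purchase} raises your long-term health risks.",
        Paternalism.PUNISH: f"{purchase} detected; entertainment features paused.",
        Paternalism.REPORT: f"Note about the {purchase} sent to your doctor.",
        Paternalism.BLOCK: f"Card transaction for {purchase} declined.",
    }
    return actions[level]

# Every rung is trivial to implement; none of them answers *whose* values apply.
print(intervene(Paternalism.SUGGEST, "donuts"))
```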

The Cultural Challenge

Moreover, how should algorithms handle the cultural differences that are inherent to such value decisions? Should a personal assistant of someone living in Saudi Arabia who expresses interest in anti-government protest movements discourage further interest in the topic? Should the assistant of someone living in Thailand censor the person’s communications to edit out criticism of government officials to protect the person from reprisals?

Should an assistant that determines its user is depressed try to cheer that person up by masking negative news and deluging him with the most positive news it can find to try to improve his emotional state? What happens when those decisions are complicated by the desires of advertisers that pay for a particular outcome?

As artificial intelligence develops at an exponential rate, what are the value systems and ethics with which we should imbue our new digital servants?

When algorithms start giving us orders, should they fulfill our innermost desires or should they save us from ourselves?

This is the future of AI.

Source: Forbes

Read more on AI ethics on our post: How To Teach Robots Right and Wrong


Is an Affair in Virtual Reality Still Cheating?

I hadn’t touched another woman in an intimate way since before getting married six years ago. Then, in the most peculiar circumstances, I was doing it. I was caressing a young woman’s hands. I remember thinking as I was doing it: I don’t even know this person’s name.

After 30 seconds, the experience became too much and I stopped. I ripped off my Oculus Rift headset and stood up from the chair I was sitting on, stunned. It was a powerful experience, and I left convinced that virtual reality was not only the future of sex, but also the future of infidelity.

Whatever happens, the old rules of fidelity are bound to change dramatically. Not because people are more open or closed-minded, but because evolving technology is about to force the issue into our brains with tantalizing 1s and 0s.

Source: Motherboard


The trauma of telling Siri you’ve been dumped

Of all the ups and downs that I’ve had in my dating life, the most humiliating moment was having to explain to Siri that I got dumped.


I found an app called Picture to Burn that aims to digitally reproduce the cathartic act of burning an ex’s photo.

“Siri, John isn’t my boyfriend anymore,” I confided to my iPhone, between sobs.

“Do you want me to remember that John is not your boyfriend anymore?” Siri responded, in the stilted, masculine British robot dialect I’d selected in “settings.”

Callously, Siri then prompted me to tap either “yes” or “no.”

I was ultimately disappointed in what technology had to offer when it comes to heartache. This is one of the problems that Silicon Valley doesn’t seem to care about.

The truth is, there isn’t (yet) a quick tech fix for a breakup.

A few months into the relationship I’d asked Siri to remember which of the many Johns* in my contacts was the one I was dating. At the time, divulging this information to Siri seemed like a big step — at long last, we were “Siri Official!” Now, though, we were Siri-Separated. Having to break the news to my iPhone—my non-human, but still intimate companion—surprisingly stung.

Even if you unfollow, unfriend, and restrain yourself from the temptation of cyberstalking, your technologies still hold onto traces of your relationships.

Perhaps, in the future, if I tell Siri I’ve just gotten dumped, it will know how to handle things more gently, offering me some sort of pre-programmed comfort, rather than algorithms that constantly surface reminders of the person who is no longer a “favorite” contact in my phone.

Source: Fusion 


Inside the surprisingly sexist world of artificial intelligence

Right now, the real danger in the world of artificial intelligence isn’t the threat of robot overlords — it’s a startling lack of diversity.

There’s no doubt Stephen Hawking is a smart guy. But the world-famous theoretical physicist recently declared that women leave him stumped.

“Women should remain a mystery,” Hawking wrote in response to a Reddit user’s question about the realm of the unknown that intrigued him most. While Hawking’s remark was meant to be light-hearted, he sounded quite serious discussing the potential dangers of artificial intelligence during Reddit’s online Q&A session:

The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

Hawking’s comments might seem unrelated. But according to some women at the forefront of computer science, together they point to an unsettling truth. Right now, the real danger in the world of artificial intelligence isn’t the threat of robot overlords—it’s a startling lack of diversity.

I spoke with a few current and emerging female leaders in robotics and artificial intelligence about how a preponderance of white men have shaped the fields—and what schools can do to get more women and minorities involved. Here’s what I learned:

  1. Hawking’s offhand remark about women is indicative of the gender stereotypes that continue to flourish in science.
  2. Fewer women are pursuing careers in artificial intelligence because the field tends to de-emphasize humanistic goals.
  3. There may be a link between the homogeneity of AI researchers and public fears about scientists who lose control of superintelligent machines.
  4. To close the diversity gap, schools need to emphasize the humanistic applications of artificial intelligence.
  5. A number of women scientists are already advancing the range of applications for robotics and artificial intelligence.
  6. Robotics and artificial intelligence don’t just need more women—they need more diversity across the board.

In general, many women are driven by the desire to do work that benefits their communities, desJardins says. Men tend to be more interested in questions about algorithms and mathematical properties.

Since men have come to dominate AI, she says, “research has become very narrowly focused on solving technical problems and not the big questions.”

Source: Quartz


The danger of tech’s far-reaching tentacles


Steve Jobs during one of his presentations of new Apple products. Photograph: Christoph Dernbach/Corbis

Excerpt from Tim Adams interview with Danny Boyle, director of Steve Jobs:

Tim Adams: We have all been complicit, I suggest, in the rise of Apple to become the world’s most valuable company, in the journey that Jobs engineered from rebellion to ubiquity and all that it entails. Did Boyle want the film to comment on that complicity?

Danny Boyle: I think so. Ultimately it is about his character, and a father and a daughter. But you do want it to try and be part of the big story of our relationship with these giant corporations. All the companies that were easy to criticise, banks, oil companies, pharmaceutical companies, they have been replaced by tech guys. And yet the atmosphere around them remains fairly benign. Governments are not powerful enough any more to resist them and the law is not quick enough. One of the reasons I wanted to do this [direct the movie Steve Jobs] is that sense that we have to constantly bring these people to account. I mean, they have emasculated journalism for one thing. They have robbed it of its income. If you want to look at that malignly you certainly could do: they have made it so nobody can afford to write stories about them. Their tentacles are so far reaching in the way the world is structured that there is a danger they become author and critic at the same time. Exactly what Jobs used to accuse IBM of.

Source: The Guardian

 

 


How To Teach Robots Right and Wrong

Artificial Moral Agents


Nayef Al-Rodhan

Over the years, robots have become smarter and more autonomous, but so far they still lack an essential feature: the capacity for moral reasoning. This limits their ability to make good decisions in complex situations.

The inevitable next step, therefore, would seem to be the design of “artificial moral agents,” a term for intelligent systems endowed with moral reasoning that are able to interact with humans as partners. In contrast with software programs, which function as tools, artificial agents have various degrees of autonomy.

However, robot morality is not simply a binary variable. In their seminal work Moral Machines, Yale’s Wendell Wallach and Indiana University’s Colin Allen analyze different gradations of the ethical sensitivity of robots. They distinguish between operational morality and functional morality. Operational morality refers to situations and possible responses that have been entirely anticipated and precoded by the designer of the robot system. This could include the profiling of an enemy combatant by age or physical appearance.

The most critical of these dilemmas is the question of whose morality robots will inherit.

Functional morality involves robot responses to scenarios unanticipated by the programmer, where the robot will need some ability to make ethical decisions alone. Here, they write, robots are endowed with the capacity to assess and respond to “morally significant aspects of their own actions.” This is a much greater challenge.
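
PL – Wallach and Allen’s distinction can be sketched in code. The toy classes below are our illustration, not their framework: operational morality is a lookup table of responses the designer anticipated, while functional morality must somehow assess situations the designer never imagined.

```python
# A sketch of the operational vs. functional morality distinction.
# The rule names are hypothetical; this is an illustration, not a framework.

class OperationalMorality:
    """Every situation and response anticipated and precoded by the designer."""
    RULES = {"civilian_detected": "hold_fire", "threat_confirmed": "engage"}

    def decide(self, situation: str) -> str:
        # Outside the precoded table, the system simply has no answer.
        return self.RULES.get(situation, "no_rule_defined")

class FunctionalMorality(OperationalMorality):
    """Adds some capacity to assess unanticipated, morally significant cases."""
    def decide(self, situation: str) -> str:
        rule = super().decide(situation)
        if rule != "no_rule_defined":
            return rule
        # Here the robot must assess the "morally significant aspects of its
        # own actions" on its own -- the much harder, unsolved problem.
        return self.assess(situation)

    def assess(self, situation: str) -> str:
        raise NotImplementedError("this is precisely the open challenge")

print(OperationalMorality().decide("civilian_detected"))  # 'hold_fire'
print(OperationalMorality().decide("novel_dilemma"))      # 'no_rule_defined'
try:
    FunctionalMorality().decide("novel_dilemma")
except NotImplementedError as err:
    print(err)  # the gap that functional morality is supposed to fill
```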

The attempt to develop moral robots faces a host of technical obstacles, but, more important, it also opens a Pandora’s box of ethical dilemmas.

Moral values differ greatly from individual to individual, across national, religious, and ideological boundaries, and are highly dependent on context. Even within any single category, these values develop and evolve over time.

Uncertainty over which moral framework to choose underlies the difficulty and limitations of ascribing moral values to artificial systems … To implement either of these frameworks effectively, a robot would need to be equipped with an almost impossible amount of information. Even beyond the issue of a robot’s decision-making process, the specific issue of cultural relativism remains difficult to resolve: no one set of standards and guidelines for a robot’s choices exists.    

For the time being, most questions of relativism are being set aside for two reasons. First, the U.S. military remains the chief patron of artificial intelligence for military applications, and Silicon Valley for other applications. As such, American interpretations of morality, with their emphasis on freedom and responsibility, will remain the default.

Source: Foreign Affairs, “The Moral Code,” August 12, 2015

PL – EXCELLENT summary of a very complex, delicate, but critical issue, Professor Al-Rodhan!

In our work we propose an essential activity in the process of moralizing AI that is being overlooked: an approach that facilitates what you put so well, the need for “AI to interact with humans as partners.”

We question the possibility that binary-coded, logic-based AI, in its current form, will one day switch from amoral to moral. This would first require universal agreement on what constitutes morals, and second, the successful upload and integration of morals, or moral capacity, into AI computing.

We do think AI can be taught “culturally relevant” moral reasoning, though, by implementing a new human/AI interface that includes a collaborative engagement protocol: a protocol that makes it possible for AI to interact with a person in such a way that it learns what is culturally relevant to that person, individually. AI that learns the values and morals of the individual, and then interacts with the individual based on what it has learned.

We call this a “whole person” engagement protocol. This person-focused approach includes AI/human interaction that embraces quantum cognition as a way of understanding what appears to be human irrationality. [Behavior and choices that, from a classical probability-based decision model, are judged to be irrational and cannot be computed.]

This whole-person approach has a different purpose, and can produce different outcomes, than current omniscient, clandestine-style methods of AI/human information-gathering, which are more like spying than collaborating: the human’s awareness of self and situation is not advanced, but only benefited as it relates to things to buy, places to go and schedules to meet.

Visualization is a critical component for AI to engage the whole person: in this case, a visual that displays interlinking data for the human; that breaks through the limitations of human working memory by displaying complex data about a person and situation in context; and that incorporates a human’s two most basic, reliable ways of knowing, big picture and details, which have to be kept in dialogue with one another. This makes it possible for the person themselves to make meaning, decide, and act in real time. [The value of visualization was demonstrated in physics in 2013 with the discovery of the amplituhedron, which replaced 500 pages of algebra formulas with one simple visual, thus reducing the overwhelm related to linear processing.]

This kind of collaborative engagement between AI and humans (even groups of humans) sets the stage for AI to offer real-time, personalized feedback for and about the individual or group. It can put the individual in the driver’s seat of his/her life as it relates to self and situation. It makes it possible for humans to navigate any kind of complex human situation: personal growth, relationships, child rearing, health, career, company issues, community issues, conflicts, etc. (In simpler terms, what we refer to as the “tough human stuff.”)

AI could then address human behavior, which, up to now, has been the elephant in the room for coders and AI developers.

We recognize that this model for AI/human interaction does not solve the ultimate AI morals/values dilemma. But it could serve to advance four major areas of this discussion:

  1. By feeding back morals/values data to individual humans, it could advance their own awareness more quickly. (The act of seeing complex contextual data expands consciousness for humans and makes it possible for them to shift and grow.)
  2. It would help humans help themselves right now (not 10 or 20 years from now).
  3. It would create a new class of data, perceptual data, as it relates to individual beliefs that drive human behavior.
  4. It would allow AI to process this additional “perceptual” data, collectively over time, to become a form of “artificial moral agent” with enhanced “moral reasoning,” working in partnership with humans.



Dr. Richard Terrile on “introducing morality into these machines”


Dr. Richard Terrile, Director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Lab

“I kind of laugh when people say we need to introduce morality into these machines. Whose morality? The morality of today? The morality of tomorrow? The morality of the 15th century? We change our morality like we change our clothing.”

Source: Huffington Post

Dr. Richard Terrile is an astronomer and the director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Laboratory.  He uses techniques based on biological evolution and development to advance the fields of robotics and computer intelligence. 
