Why Ethical Robots Might Not Be Such a Good Idea After All

This week my colleague Dieter Vanderelst presented our paper, “The Dark Side of Ethical Robots,” at AIES 2018 in New Orleans.

I blogged about Dieter’s very elegant experiment here, but let me summarize. With two NAO robots he set up a demonstration of an ethical robot helping another robot acting as a proxy human, then showed that with a very simple alteration of the ethical robot’s logic it is transformed into a distinctly unethical robot—behaving either competitively or aggressively toward the proxy human.

Here are our paper’s key conclusions:

The ease of transformation from ethical to unethical robot is hardly surprising. It is a straightforward consequence of the fact that both ethical and unethical behaviors require the same cognitive machinery with—in our implementation—only a subtle difference in the way a single value is calculated. In fact, the difference between an ethical (i.e. seeking the most desirable outcomes for the human) robot and an aggressive (i.e. seeking the least desirable outcomes for the human) robot is a simple negation of this value.
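To make the point concrete, here is a minimal sketch (not the paper's actual implementation, and the action names are invented for illustration) of how a single sign flip on one evaluated value turns an ethical action selector into an aggressive one:

```python
# Hypothetical sketch: an ethical action selector scores each candidate
# action by how desirable its predicted outcome is for the human, then
# picks the best. Negating that single value reuses the same cognitive
# machinery but produces the opposite, aggressive behavior.

def select_action(actions, outcome_for_human, ethical=True):
    """Pick the action whose predicted outcome is best (or worst) for the human.

    actions           -- candidate actions the robot can take
    outcome_for_human -- maps an action to a desirability score for the human
    ethical           -- if False, the score is negated: same machinery,
                         opposite behavior
    """
    sign = 1 if ethical else -1
    return max(actions, key=lambda a: sign * outcome_for_human(a))

# Toy example: desirability of each (hypothetical) action for the proxy human.
scores = {"warn_of_hazard": 0.9, "do_nothing": 0.1, "block_path": -0.8}
assert select_action(scores, scores.get, ethical=True) == "warn_of_hazard"
assert select_action(scores, scores.get, ethical=False) == "block_path"
```

The fragility the paper highlights is visible here: the entire difference between helpful and harmful behavior is one boolean an attacker or unscrupulous party would need to flip.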

Let us examine the risks associated with ethical robots and if, and how, they might be mitigated. There are three.

  1. First, there is the risk that an unscrupulous manufacturer could deliberately build unethical behaviors into its robots.
  2. Perhaps more serious is the risk arising from robots with user-adjustable ethics settings.
  3. But even hard-coded ethics would not guard against undoubtedly the most serious risk of all: that the ethical rules are vulnerable to malicious hacking.

It is very clear that guaranteeing the security of ethical robots is beyond the scope of engineering and will need regulatory and legislative efforts.

Considering the ethical, legal and societal implications of robots, it becomes obvious that robots themselves are not where responsibility lies. Robots are simply smart machines of various kinds and the responsibility to ensure they behave well must always lie with human beings. In other words, we require ethical governance, and this is equally true for robots with or without explicit ethical behaviors.

Two years ago I thought the benefits of ethical robots outweighed the risks. Now I’m not so sure.

I now believe that, even with strong ethical governance, the risks that a robot’s ethics might be compromised by unscrupulous actors are so great as to raise very serious doubts over the wisdom of embedding ethical decision making in real-world safety critical robots, such as driverless cars. Ethical robots might not be such a good idea after all.

Thus, even though we’re calling into question the wisdom of explicitly ethical robots, that doesn’t change the fact that we absolutely must design all robots to minimize the likelihood of ethical harms; in other words, we should be designing implicitly ethical robots, within Moor’s schema.

Source: IEEE




Putin: Leader in artificial intelligence will rule world

Putin, speaking Friday at a meeting with students, said the development of AI raises “colossal opportunities and threats that are difficult to predict now.”

[He] warned that “it would be strongly undesirable if someone wins a monopolist position” and promised that Russia would be ready to share its know-how in artificial intelligence with other nations.

The Russian leader predicted that future wars will be fought by drones, and “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”

Source: Washington Post


We’re so unprepared for the robot apocalypse

Industrial robots alone eliminated up to 670,000 American jobs between 1990 and 2007.

It seems that after a factory sheds workers, that economic pain reverberates, triggering further unemployment at, say, the grocery store or the neighborhood car dealership.

In a way, this is surprising. Economists understand that automation has costs, but they have largely emphasized the benefits: Machines make things cheaper, and they free up workers to do other jobs.

The latest study reveals that for manufacturing workers, the process of adjusting to technological change has been much slower and more painful than most experts thought. 

every industrial robot eliminated about three manufacturing positions, plus three more jobs from around town

“We were looking at a span of 20 years, so in that timeframe, you would expect that manufacturing workers would be able to find other employment,” Restrepo said. Instead, not only did the factory jobs vanish, but other local jobs disappeared too.

This evidence draws attention to the losers — the dislocated factory workers who just can’t bounce back

one robot in the workforce led to the loss of 6.2 jobs within a commuting zone where local people travel to work.

The robots also reduce wages, with one robot per thousand workers leading to a wage decline of between 0.25% and 0.5%, according to Fortune.

None of these efforts, though, seem to be doing enough for communities that have lost their manufacturing bases, where people have reduced earnings for the rest of their lives.

Perhaps that much was obvious. After all, anecdotes about the Rust Belt abound. But the new findings bolster the conclusion that these economic dislocations are not brief setbacks, but can hurt areas for an entire generation.

How do we even know that automation is a big part of the story at all? A key bit of evidence is that, despite the massive layoffs, American manufacturers are making more stuff than ever. Factories have become vastly more productive.

some consultants believe that the number of industrial robots will quadruple in the next decade, which could mean millions more displaced manufacturing workers

The question, now, is what to do about a period of “maladjustment” that lasts decades, or possibly a lifetime, as the latest evidence suggests.

automation amplified opportunities for people with advanced skills and talents

Source: The Washington Post


Burger-flipping robot could spell the end of teen employment

The AI-driven robot ‘Flippy,’ by Miso Robotics, is marketed as a kitchen assistant, rather than a replacement for professionally trained teens that ponder the meaning of life — or what their crush looks like naked — while awaiting a kitchen timer’s signal that it’s time to flip the meat.

Flippy features a number of different sensors and cameras to identify food objects on the grill. It knows, for example, that burgers and chicken-like patties cook for a different duration. Once done, the machine expertly lifts the burger off the grill and uses its on-board technology to place it gently on a perfectly-browned bun.

The robot doesn’t just work the grill like a master hibachi chef, either. Flippy is capable of deep frying, chopping vegetables, and even plating dishes.

Source: TNW


Genetically engineered humans will arrive sooner than you think. And we’re not ready

Michael Bess is a historian of science at Vanderbilt University and the author of a fascinating new book, Our Grandchildren Redesigned: Life in a Bioengineered Society. Bess’s book offers a sweeping look at our genetically modified future, a future as terrifying as it is promising.

“What’s happening is bigger than any one of us”

We single out the industrial revolutions of the past as major turning points in human history because they marked major ways in which we changed our surroundings to make our lives easier, better, longer, healthier.

So these are just great landmarks, and I’m comparing this to those big turning points because now the technology, instead of being applied to our surroundings — how we get food for ourselves, how we transport things, how we shelter ourselves, how we communicate with each other — now those technologies are being turned directly on our own biology, on our own bodies and minds.

And so, instead of transforming the world around ourselves to make it more what we wanted it to be, now it’s becoming possible to transform ourselves into whatever it is that we want to be. And there’s both power and danger in that, because people can make terrible miscalculations, and they can alter themselves, maybe in ways that are irreversible, that do irreversible harm to the things that really make their lives worth living.

“We’re going to give ourselves a power that we may not have the wisdom to control very well”

I think most historians of technology … see technology and society as co-constructing each other over time, which gives human beings a much greater space for having a say in which technologies will be pursued and what direction we will take, and how much we choose to have them come into our lives and in what ways.

 Source: Vox



Google teaches robots to learn from each other


Google has a plan to speed up robotic learning, and it involves getting robots to share their experiences – via the cloud – and collectively improve their capabilities – via deep learning.

Google researchers decided to combine two recent technology advances. The first is cloud robotics, a concept that envisions robots sharing data and skills with each other through an online repository. The other is machine learning, and in particular, the application of deep neural networks to let robots learn for themselves.

They got the robots to pool their experiences to “build a common model of the skill” that, as the researchers explain, was better and faster than what they could have achieved on their own.
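The "pool experiences, build a common model" idea can be sketched very simply. The following is a hypothetical illustration (not Google's actual system, whose details the article does not give): each robot learns local model parameters, and the cloud averages them into a shared model that every robot downloads and continues training:

```python
# Minimal sketch of collective learning via parameter averaging: several
# robots upload locally learned parameter vectors, and the cloud combines
# them into one shared model.
import numpy as np

def average_models(local_params):
    """Average parameter vectors uploaded by several robots."""
    return np.mean(np.stack(local_params), axis=0)

# Three robots with slightly different locally learned parameters
# (illustrative numbers only).
robot_a = np.array([0.9, 0.1, 0.5])
robot_b = np.array([1.1, -0.1, 0.5])
robot_c = np.array([1.0, 0.0, 0.5])

shared = average_models([robot_a, robot_b, robot_c])
print(shared)  # -> [1.  0.  0.5]; each robot resumes learning from this model
```

Averaging is only one possible aggregation rule; the key point is that every robot's experience improves the shared model, so skills accumulate faster than any single robot could manage alone.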

As robots begin to master the art of learning it’s inevitable that one day they’ll be able to acquire new skills instantly at much, much faster rates than humans have ever been able to.

Source: Global Futurist

 


This Robot-Made Pizza Is Baked in the Van on the Way to Your Door #AI

Co-Bot Environment

“We have what we call a co-bot environment; so humans and robots working collaboratively,” says Zume Pizza Co-Founder Julia Collins. “Robots do everything from dispensing sauce, to spreading sauce, to placing pizzas in the oven.”

Each pie is baked in the delivery van, which means “you get something that is pizzeria fresh, hot and sizzling.”

To see Zume’s pizza-making robots in action, check out the video.

Source: Forbes


If a robot has enough human characteristics people will lie to it to save hurting its feelings, study says


The study, which explored how robots can gain a human’s trust even when they make mistakes, pitted an efficient but inexpressive robot against an error prone, emotional one and monitored how its colleagues treated it.

The researchers found that people are more likely to forgive a personable robot’s mistakes, and will even go so far as lying to the robot to prevent its feelings from being hurt. 

Researchers at the University of Bristol and University College London created a robot called Bert to help participants with a cooking exercise. Bert was given two large eyes and a mouth, making it capable of looking happy and sad, or not expressing emotion at all.

“Human-like attributes, such as regret, can be powerful tools in negating dissatisfaction,” said Adrianna Hamacher, the researcher behind the project. “But we must identify with care which specific traits we want to focus on and replicate. If there are no ground rules then we may end up with robots with different personalities, just like the people designing them.” 

In one set of tests the robot performed the tasks perfectly and didn’t speak or change its happy expression. In another it would make a mistake that it tried to rectify, but wouldn’t speak or change its expression.

A third version of Bert would communicate with the chef by asking questions such as “Are you ready for the egg?” But when it tried to help, it would drop the egg and reacted with a sad face in which its eyes widened and the corners of its mouth were pulled downwards. It then tried to make up for the fumble by apologising and telling the human that it would try again.

Once the omelette had been made this third Bert asked the human chef if it could have a job in the kitchen. Participants in the trial said they feared that the robot would become sad again if they said no. One of the participants lied to the robot to protect its feelings, while another said they felt emotionally blackmailed.

At the end of the trial the researchers asked the participants which robot they preferred working with. Even though the third robot made mistakes, 15 of the 21 participants picked it as their favourite.

Source: The Telegraph


Sixty-two percent of organizations will be using artificial intelligence (AI) by 2018, says Narrative Science

Artificial intelligence received $974m of funding as of June 2016, and this figure will only rise with the news that 2016 saw more AI patent applications than ever before.

This year’s funding is set to surpass 2015’s total and CB Insights suggests that 200 AI-focused companies have raised nearly $1.5 billion in equity funding.


Artificial Intelligence statistics by sector

AI isn’t limited to the business sphere; in fact, the personal robot market, including ‘care-bots’, could reach $17.4bn by 2020.

Care-bots could prove to be a fantastic solution as the world’s populations see an exponential rise in elderly people. Japan is leading the way, devoting a third of its government robotics budget to the elderly.

Source: Raconteur: The rise of artificial intelligence in 6 charts


Blurring the boundaries between humans and robots

Inspired by Japan’s unique spiritual beliefs, Japanese roboticists are blurring the boundaries between humans and robots.

“It is a question of where the soul is. Japanese people have always been told that the soul can exist in everything and anything. So we don’t have any problem with the idea that a robot too has a soul. We don’t make much distinction between humans and robots.” –   Roboticist, Hiroshi Ishiguro

Geminoid HI-1 is a doppelganger droid built by its male co-creator, roboticist Hiroshi Ishiguro. It is controlled by a motion-capture interface. It can imitate Ishiguro’s body and facial movements, and it can reproduce his voice in sync with his motion and posture. Ishiguro hopes to develop the robot’s human-like presence to such a degree that he could use it to teach classes remotely, lecturing from home while the Geminoid interacts with his classes at Osaka University.

NOTE: this video was published on Youtube Mar 17, 2012


Meet Pineapple, NKU’s newest artificial intelligence

Pineapple will be used for the next three years for research into social robotics

“Robots are getting more intelligent, more sociable. People are treating robots like humans! People apply humor and social norms to robots,” Dr. [Austin] Lee said. “Even when you think logically there’s no way, no reason, to do that; it’s just a machine without a heart. But because people attach human attributes to robots, I think a robot can be an effective persuader.”


Dr. Austin Lee and Anne Thompson with Pineapple the robot

Source: The Northerner


The Rise of the Robot Therapist

Social robots appear to be particularly effective in helping participants with behaviour problems develop better control over their behaviour.

Romeo Vitelli, Ph.D.

In recent years, we’ve seen a rise in different interactive technologies and new ways of using them to treat various mental health problems. Among other things, these include online, computer-based, and even virtual reality approaches to cognitive-behavioural therapy. But what about using robots to provide treatment and/or emotional support?

A new article published in Review of General Psychology provides an overview of some of the latest advances in robotherapy and what we can expect in the future. Written by Cristina Costescu and Daniel O. David of Romania’s Babes-Bolyai University and Bram Vanderborght of Vrije Universiteit in Belgium, the article covers different studies showing how robotics is transforming personal care.

What they found was a fairly strong treatment effect for using robots in therapy: 69 percent of the 581 study participants who received an alternative treatment performed more poorly overall than those who received robotic therapy.

As for individuals with autism, research has already shown that they can be even more responsive to treatment using social robots than with human therapists due to their difficulty with social cues.

Though getting children with autism to participate in treatment is often frustrating for human therapists, the children often respond extremely well to robot-based therapy that helps them become more independent.

 Source: Psychology Today


Owners of first humanoid robot sign agreement not to have sex with it


© Yuya Shino / Reuters

The Japan-based company SoftBank, which created Pepper the robot, requires customers to sign a document forbidding them from using the humanoid for sexual purposes or creating sexy apps.

Even after having paid nearly US$2,000 for the robot, users may have to return Pepper to its makers should they get too personal with the emotional artificial being.

The clause reads that Pepper must not be used “for sexual activity and actions for the purpose of indecent acts, or acts for the purpose of meeting and dating and making acquaintance of the opposite sex.”

Currently Pepper is available for purchase to Japanese residents only, and buyers must be older than 20.

Source: rt.com


What IBM Watson must never become

Below is an excerpted dialog between an AI-powered robot “synth” called Vera and her human patient, Dr. Millican, who also happens to be one of the original developers of synths, from the hit British-American TV drama Humans.

Synth caregiver Vera – Please stick out your tongue.

Dr. Millican – You’re kidding me.

Synth caregiver Vera – Any non-compliance or variation in your medication intake must be reported to your GP.

Dr. Millican – You’re not a carer, you’re a jailer. Elster would be sick to his stomach if he saw what you have become. I’m fine, now get lost.

Synth caregiver Vera – You should sleep now, Dr. Millican; your pulse is slightly elevated.

Dr. Millican – Slightly?

Synth caregiver Vera – Your GP will be notified of any refusal to follow recommendations made in your best interests.


From the TV series Humans, episode #2


Should we be afraid of AI? The military wants a real answer by the end of 2015

The Military’s New Year’s Resolution for Artificial Intelligence

In November, Undersecretary of Defense Frank Kendall quietly issued a memo to the Defense Science Board that could go on to play a role in history.

The memo calls for a new study that would “identify the science, engineering, and policy problems that must be solved to permit greater operational use of autonomy across all war-fighting domains… Emphasis will be given to exploration of the bounds – both technological and social – that limit the use of autonomy across a wide range of military operations. The study will ask questions such as: What activities cannot today be performed autonomously? When is human intervention required? What limits the use of autonomy? How might we overcome those limits and expand the use of autonomy in the near term as well as over the next 2 decades?”

A Defense Department official very close to the effort framed the request more simply. “We want a real roadmap for autonomy,” he told Defense One. What does that mean, and how would a “real roadmap” influence decision-making in the years ahead? One outcome of the Defense Science Board 2015 Summer Study on Autonomy, assuming the results are eventually made public, is that the report’s findings could refute or confirm some of our worst fears about the future of artificial intelligence.

Source: Defense One


Robots changing society, humans need not apply

Sobering overview of the real impact robots are having on society and jobs right now. As the narrator says, “You may think we have been here before, but we haven’t; this time is different.” It’s not just the automation of manual jobs: lawyers, doctors, and other white-collar workers are affected too. What does it mean? Check it out. This video was posted August 13, and seven days later it had 1,805,673 views.
