Teaching an Algorithm to Understand Right and Wrong


Aristotle observes that “all knowledge and every pursuit aims at some good,” but then asks, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma.

We all agree that we should be good and just, but it’s much harder to decide what that entails.

“We need to decide to what extent the legal principles that we use to regulate humans can be used for machines. There is a great potential for machines to alert us to bias. We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.” – Francesca Rossi, an AI researcher at IBM

Since Aristotle’s time, the questions he raised have been continually discussed and debated. 

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to define the principles by which a person should act justly and wisely, how are we to encode them within the artificial intelligences we are creating? It is a question we will need to answer soon.

Cultural Norms vs. Moral Values

Another issue we will have to contend with is deciding not only what ethical principles to encode in artificial intelligences but also how they are coded. As noted above, for most of us, “Thou shalt not kill” is a strict principle. For a few, such as a Secret Service agent or a soldier, it is more like a preference that is greatly affected by context.

What makes one thing a moral value and another a cultural norm? Well, that’s a tough question for even the most-lauded human ethicists, but we will need to code those decisions into our algorithms. In some cases, there will be strict principles; in others, merely preferences based on context. For some tasks, algorithms will need to be coded differently according to what jurisdiction they operate in.
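One way to picture the distinction the paragraph above draws is as a minimal sketch in code: strict principles become hard constraints that no context can override, while preferences become weighted penalties traded off against context, with the weights varying by jurisdiction. Everything here (the names, the action, the weights) is purely illustrative, not a real system.

```python
from dataclasses import dataclass, field

@dataclass
class EthicsConfig:
    # Strict principles: actions that are never permitted, regardless of context.
    hard_constraints: set = field(default_factory=set)
    # Preferences: per-action penalties that context can outweigh.
    preference_weights: dict = field(default_factory=dict)

def evaluate_action(action: str, context_score: float, config: EthicsConfig) -> float:
    """Score an action; -inf means categorically forbidden."""
    if action in config.hard_constraints:
        return float("-inf")                      # strict principle
    penalty = config.preference_weights.get(action, 0.0)
    return context_score - penalty                # preference weighed against context

# Two hypothetical jurisdictions encode the same action differently.
eu_rules = EthicsConfig(hard_constraints={"share_personal_data"})
us_rules = EthicsConfig(preference_weights={"share_personal_data": 5.0})

print(evaluate_action("share_personal_data", 3.0, eu_rules))  # forbidden outright
print(evaluate_action("share_personal_data", 3.0, us_rules))  # merely penalized
```

The point of the sketch is only that the same action can be a hard rule in one configuration and a context-dependent trade-off in another, which is exactly the coding decision the text says we will have to make.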

Setting a Higher Standard

Most AI experts I’ve spoken to think that we will need to set higher moral standards for artificial intelligences than we do for humans.

Major industry players, such as Google, IBM, Amazon, and Facebook, recently set up a partnership to create an open platform between leading AI companies and stakeholders in academia, government, and industry to advance understanding and promote best practices. Yet that is merely a starting point.

Source: Harvard Business Review


New Research Center to Explore Ethics of Artificial Intelligence


The Chimp robot, built by a Carnegie Mellon team, took third place in a competition held by DARPA last year. The school is starting a research center focused on the ethics of artificial intelligence. Credit: Chip Somodevilla/Getty Images

Carnegie Mellon University plans to announce on Wednesday that it will create a research center that focuses on the ethics of artificial intelligence.

The ethics center, called the K&L Gates Endowment for Ethics and Computational Technologies, is being established at a time of growing international concern about the impact of A.I. technologies.

“We are at a unique point in time where the technology is far ahead of society’s ability to restrain it” – Subra Suresh, Carnegie Mellon’s president

The new center is being created with a $10 million gift from K&L Gates, an international law firm headquartered in Pittsburgh.

Peter J. Kalis, chairman of the law firm, said the potential impact of A.I. technology on the economy and culture made it essential that as a society we make thoughtful, ethical choices about how the software and machines are used.

“Carnegie Mellon resides at the intersection of many disciplines,” he said. “It will take a synthesis of the best thinking of all of these disciplines for society to define the ethical constraints on the emerging A.I. technologies.”

Source: NY Times


Genetically engineered humans will arrive sooner than you think. And we’re not ready

Michael Bess is a historian of science at Vanderbilt University and the author of a fascinating new book, Our Grandchildren Redesigned: Life in a Bioengineered Society. Bess’s book offers a sweeping look at our genetically modified future, a future as terrifying as it is promising.

“What’s happening is bigger than any one of us”

We single out the industrial revolutions of the past as major turning points in human history because they marked major ways in which we changed our surroundings to make our lives easier, better, longer, healthier.

So these are just great landmarks, and I’m comparing this to those big turning points because now the technology, instead of being applied to our surroundings — how we get food for ourselves, how we transport things, how we shelter ourselves, how we communicate with each other — now those technologies are being turned directly on our own biology, on our own bodies and minds.

And so, instead of transforming the world around ourselves to make it more what we wanted it to be, now it’s becoming possible to transform ourselves into whatever it is that we want to be. And there’s both power and danger in that, because people can make terrible miscalculations, and they can alter themselves, maybe in ways that are irreversible, that do irreversible harm to the things that really make their lives worth living.

“We’re going to give ourselves a power that we may not have the wisdom to control very well”

I think most historians of technology … see technology and society as co-constructing each other over time, which gives human beings a much greater space for having a say in which technologies will be pursued and what direction we will take, and how much we choose to have them come into our lives and in what ways.

 Source: Vox





AI is one of top 5 tools humanity has ever had

 A few highlights from AI panel at the White House Frontiers Conference

On the impact of AI

Andrew McAfee (MIT):


To view the video, click on the picture, scroll down the page to Live Stream, and click to start the video. It may take a minute to load; then skip to the time you want to watch.

(Begins @ 2:40:34)

We are at an inflection point … I think the development of these kinds of [AI] tools are going to rank among probably the top 5 tools humanity has ever had to take better care of each other and to tread more lightly on the planet … top 5 in our history. Like the book, maybe, the steam engine, maybe, written language — I might put the Internet there. We’ve all got our pet lists of the biggest inventions ever. AI needs to be on the very, very, short list.

On bias in AI

Fei-Fei Li, Professor of Computer Science, Stanford University:

(Begins @ 3:14:57)

Research repeatedly has shown that when people work in diverse groups there is increased creativity and innovation.

And interestingly, it is harder to work as a diverse group. I’m sure everybody here in the audience has had that experience. We have to listen to each other more. We have to understand each other’s perspective more. But that also correlates well with innovation and creativity. … If we don’t have the inclusion of [diverse] people to think about the problems and the algorithms in AI, we might not only be missing the innovation boat, we might actually create bias and unfairness that are going to be detrimental to our society … 

What I have been advocating at Stanford, and with my colleagues in the community is, let’s bring the humanistic mission statement into the field of AI. Because AI is fundamentally an applied technology that’s going to serve our society. Humanistic AI not only raises the awareness and the importance of our technology, it’s actually a really, really important way to attract diverse students and technologists and innovators to participate in the technology of AI.

There has been a lot of research showing that people with diverse backgrounds put more emphasis on humanistic missions in their work and in their lives. So, if in our education and our research we can accentuate or bring out this humanistic message of the technology, we are more likely to invite a diversity of students and young technologists to join us.

On lack of minorities in AI

Andrew Moore, Dean, School of Computer Science, Carnegie Mellon University:

(Begins @ 3:19:10)

I so strongly applaud what you [Fei-Fei Li] are describing here because I think we are engaged in a fight here for how the 21st century pans out in terms of who’s running the world … 

The nightmare, the silly, silly thing we could do … would be if … the middle of the century is built by a bunch of non-minority guys from suburban moderately wealthy United States instead of the full population of the United States.

Source: Frontiers Conference
Click on the video labeled “Live Stream (event will start shortly)”; it may take a minute to load.

(Update 02/24/17: The original timelines listed above may be different when revisiting this video.)


“Big data need big theory too”

A published paper by Peter V. Coveney, Edward R. Dougherty and Roger R. Highfield:


The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry.
Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales.

Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their ‘depth’ and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data.

Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote ‘blind’ big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare.
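The failure mode the authors describe, a model that fits its training data well yet breaks down outside the range of that data, can be illustrated with a deliberately simple sketch (not from the paper): fit a straight line to data generated by a nonlinear process on a narrow interval, then ask it to predict far outside that interval.

```python
# Illustrative only: curve fitting captures a local correlation,
# not the structural mechanism of the underlying system.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

true_process = lambda x: x ** 2           # the real (nonlinear) mechanism
xs = [i / 10 for i in range(11)]          # training data only on [0, 1]
a, b = fit_line(xs, [true_process(x) for x in xs])

in_range_err = abs((a * 0.5 + b) - true_process(0.5))    # small
out_of_range_err = abs((a * 10 + b) - true_process(10))  # large
print(in_range_err, out_of_range_err)
```

Inside the training range the line looks adequate; at x = 10 its prediction is wrong by roughly the entire true value, because nothing in the fitted model encodes the quadratic structure of the process. That is the paper’s argument in miniature: theory about the mechanism, not more curve fitting, is what licenses extrapolation.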

Source: The Royal Society Publishing


A Legal Definition Of Artificial Intelligence?

Defining the terms: artificial and intelligence

image from Shutterstock


For regulatory purposes, “artificial” is, hopefully, the easy bit. It can simply mean “not occurring in nature or not occurring in the same form in nature”. Here, the alternative given after the “or” allows for the possible future use of modified biological materials.

This, then, leaves the knottier problem of “intelligence”.

Intelligence measures an agent’s ability to achieve goals in a wide range of environments.

Definition proposed by: Marcus Hutter (now at ANU) and Shane Legg (now at Google DeepMind) 

This informal definition signposts things that a regulator could manage, establishing and applying objective measures of ability (as defined) of an entity in one or more environments (as defined). The core focus on achievement of goals also elegantly covers other intelligence-related concepts such as learning, planning and problem solving.
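The formal version of this definition scores an agent by its goal achievement across many environments, weighting simpler environments more heavily. A toy sketch of that idea (with made-up environments, scores, and weights; the real measure uses Kolmogorov complexity, which is uncomputable) might look like:

```python
# Toy sketch of the idea behind the Legg-Hutter intelligence measure:
# a weighted sum of per-environment performance, where simpler
# environments (lower complexity) get exponentially larger weights.
# All environments and numbers below are purely hypothetical.

def universal_score(performance_by_env, complexity_by_env):
    """Weighted sum of performance; weight = 2 ** -complexity."""
    return sum(perf * 2 ** -complexity_by_env[env]
               for env, perf in performance_by_env.items())

perf = {"maze": 0.9, "trading": 0.4, "dialogue": 0.7}  # scores in [0, 1]
cplx = {"maze": 1, "trading": 3, "dialogue": 2}        # rough complexities

print(universal_score(perf, cplx))
```

For a regulator, the attraction is that such a measure reduces “intelligence” to something testable: defined environments, defined performance metrics, and a fixed weighting scheme.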

But many hurdles remain.

First, the informal definition may not be directly usable for regulatory purposes because of the underlying constraints of AIXI, Hutter’s formal model of a maximally intelligent agent on which the definition is based. One constraint, often emphasised by Hutter, is that AIXI can only be “approximated” in a computer because of time and space limitations.

Another constraint is that AIXI lacks a “self-model” (but a recently proposed variant called “reflective AIXI” may change that).

Second, for testing and certification purposes, regulators have to be able to treat intelligence as something divisible into many sub-abilities (such as movement, communication, etc.). But this may cut across any definition based on general intelligence.

From a consumer perspective, this is ultimately all a question of drawing the line between a system defined as displaying actual AI, as opposed to being just another programmable box …

Source: Lifehacker

PL: Ouch


Inside the surprisingly sexist world of artificial intelligence

Right now, the real danger in the world of artificial intelligence isn’t the threat of robot overlords — it’s a startling lack of diversity.

There’s no doubt Stephen Hawking is a smart guy. But the world-famous theoretical physicist recently declared that women leave him stumped.

“Women should remain a mystery,” Hawking wrote in response to a Reddit user’s question about the realm of the unknown that intrigued him most. While Hawking’s remark was meant to be light-hearted, he sounded quite serious discussing the potential dangers of artificial intelligence during Reddit’s online Q&A session:

The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

Hawking’s comments might seem unrelated. But according to some women at the forefront of computer science, together they point to an unsettling truth. Right now, the real danger in the world of artificial intelligence isn’t the threat of robot overlords—it’s a startling lack of diversity.

I spoke with a few current and emerging female leaders in robotics and artificial intelligence about how a preponderance of white men have shaped the fields—and what schools can do to get more women and minorities involved. Here’s what I learned:

  1. Hawking’s offhand remark about women is indicative of the gender stereotypes that continue to flourish in science.
  2. Fewer women are pursuing careers in artificial intelligence because the field tends to de-emphasize humanistic goals.
  3. There may be a link between the homogeneity of AI researchers and public fears about scientists who lose control of superintelligent machines.
  4. To close the diversity gap, schools need to emphasize the humanistic applications of artificial intelligence.
  5. A number of women scientists are already advancing the range of applications for robotics and artificial intelligence.
  6. Robotics and artificial intelligence don’t just need more women—they need more diversity across the board.

In general, many women are driven by the desire to do work that benefits their communities, desJardins says. Men tend to be more interested in questions about algorithms and mathematical properties.

Since men have come to dominate AI, she says, “research has become very narrowly focused on solving technical problems and not the big questions.”

Source: Quartz


The danger of tech’s far reaching tentacles

Jobs one last thing

Steve Jobs during one of his presentations of new Apple products. Photograph: Christoph Dernbach/Corbis

Excerpt from Tim Adams interview with Danny Boyle, director of Steve Jobs:

Tim Adams: We have all been complicit, I suggest, in the rise of Apple to be world’s most valuable company, in the journey that Jobs engineered from rebellion to ubiquity and all that it entails. Did Boyle want the film to comment on that complicity?

Danny Boyle: I think so. Ultimately it is about his character, and a father and a daughter. But you do want it to try and be part of the big story of our relationship with these giant corporations. All the companies that were easy to criticise, banks, oil companies, pharmaceutical companies, they have been replaced by tech guys. And yet the atmosphere around them remains fairly benign. Governments are not powerful enough any more to resist them and the law is not quick enough. One of the reasons I wanted to do this [direct the movie Steve Jobs] is that sense that we have to constantly bring these people to account. I mean, they have emasculated journalism for one thing. They have robbed it of its income. If you want to look at that malignly you certainly could do: they have made it so nobody can afford to write stories about them. Their tentacles are so far reaching in the way the world is structured that there is a danger they become author and critic at the same time. Exactly what Jobs used to accuse IBM of.

Source: The Guardian




First feature film ever told from the point of view of artificial intelligence

Stephen Hawking, Elon Musk and Bill Gates will love this one! (Not)

“We made NIGHTMARE CODE to open up a highly relevant conversation, asking how our mastery of computer code is changing our basic human codes of behavior. Do we still control our tools, or are we—willingly—allowing our tools to take control of us?”

The movie synopsis: “Brett Desmond, a genius programmer with a troubled past, is called in to finish a top secret behavior recognition program, ROPER, after the previous lead programmer went insane. But the deeper Brett delves into the code, the more his own behavior begins changing … in increasingly terrifying ways.”

“NIGHTMARE CODE came out of something I learned working in video-game development,” Netter says. “Prior to that experience, I thought that any two programmers of comparable skill would write the same program with code that would be 95 percent similar. I learned instead that different programmers come up with vastly different coding solutions, meaning that somewhere deep inside every computer, every mobile phone, is the individual personality of a programmer—expressed as logic.

“But what if this personality, this logic, was sentient? And what if it was extremely pissed off?”

Available on Google Play



What IBM Watson must never become

Below is excerpted dialogue between an AI-powered robot (“synth”) called Vera and her human patient, Dr. Millican, who also happens to be one of the original developers of synths, from the hit British-American TV drama Humans.

Synth caregiver Vera – Please stick out your tongue.

Dr. Millican – You’re kidding me.

Synth caregiver Vera – Any non-compliance or variation in your medication intake must be reported to your GP.

Dr. Millican – You’re not a carer, you’re a jailer. Elster would be sick to his stomach if he saw what you have become. I’m fine. Now get lost.

Synth caregiver Vera – You should sleep now Dr Millican, your pulse is slightly elevated.

Dr. Millican – Slightly?

Synth caregiver Vera – Your GP will be notified of any refusal to follow recommendations made in your best interests.

Humans poster

From the TV series Humans, episode #2


Point of this blog on Socializing AI

Artificial Intelligence must be about more than our things. It must be about more than our machines. It must be a way to advance human behavior in complex human situations. But this will require wisdom-powered code. It will require imprinting AI’s genome with social intelligence for human interaction. It must begin right now.
— Phil Lawson
(read more)