I prefer to be killed by my own stupidity rather than by the codified morals of a software engineer…

…or by the learned morals of an evolving algorithm. – SAS CTO Oliver Schabenberger

With the advent of deep learning, machines are beginning to solve problems in a novel way: by writing the algorithms themselves.

The software developer who codifies a solution through programming logic is replaced by a data scientist who defines and trains a deep neural network.

The expert who studied and learned a domain is replaced by a reinforcement learning algorithm that discovers the rules of play from historical data.

We are learning incredible lessons in this process.

But does the rise of such highly sophisticated deep learning mean that machines will soon surpass their makers? They are surpassing us in reliability, accuracy and throughput. But they are not surpassing us in thinking or learning. Not with today’s technology.

The artificial intelligence systems of today learn from data – they learn only from data. These systems cannot grow beyond the limits of the data by creating, innovating or reasoning.

Even a reinforcement learning system that discovers rules of play from past data cannot develop completely new rules or new games. It can apply the rules in a novel and more efficient way, but it does not invent a new game. The machine that learned to play Go better than any human being does not know how to play Poker.

Where to from here?

True intelligence requires creativity, innovation, intuition, independent problem solving, self-awareness and sentience. Systems built on deep learning do not – and cannot – have these characteristics, because they are trained by top-down, supervised methods.

We first tell the machine the ground truth so that it can discover its regularities. It does not grow beyond that.
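The point can be illustrated with a minimal sketch (the data and the toy 1-nearest-neighbour "learner" are hypothetical, chosen only to make the idea concrete): a supervised model can only reuse the labels it was given as ground truth, so even an input far outside its training data is answered with a label it has already seen.

```python
# A minimal sketch (hypothetical data): a 1-nearest-neighbour "learner"
# can only echo the labels it was shown; it never invents a new label.

def train(examples):
    """Store labelled (feature, label) pairs -- the human-supplied ground truth."""
    return list(examples)

def predict(model, x):
    """Return the label of the training example whose feature is closest to x."""
    nearest = min(model, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

# Ground truth supplied by humans: small numbers -> "low", large -> "high".
model = train([(1, "low"), (2, "low"), (9, "high"), (10, "high")])

print(predict(model, 1.5))   # "low"  -- interpolates within the data
print(predict(model, 1000))  # "high" -- far outside the data, it still just reuses a known label
```

However sophisticated the model, the output vocabulary is fixed by the training labels: nothing in this procedure can produce an answer the ground truth never contained.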

Source: InformationWeek




Do we still need human judges in the age of Artificial Intelligence?

Technology and the law are converging, and where they meet, new questions arise about the relative roles of artificial and human agents, and about the ethical issues involved in the shift from one to the other. While legal technology has largely focused on the activities of the bar, it challenges us to think about its application to the bench as well. In particular:

Could AI replace human judges?

The idea of AI judges raises important ethical issues around bias and autonomy. AI programs may incorporate the biases of their programmers and of the humans they interact with.

But while such programs may replicate existing human biases, the distinguishing feature of AI over an ordinary algorithm is that it can behave in surprising and unintended ways as it ‘learns.’ Eradicating bias therefore becomes even more difficult, though not impossible. Any AI judging program would need to account for, and be tested for, these biases.

Appealing to rationality, the counter-argument is that human judges are already biased, and that AI can be used to improve the way we deal with those biases and reduce our ignorance. Yet suspicions about AI judges remain, and are already enough of a concern to lead the European Union to promulgate the General Data Protection Regulation, which becomes effective in 2018. The Regulation contains

“the right not to be subject to a decision based solely on automated processing”.

As the English utilitarian legal theorist Jeremy Bentham once wrote in An Introduction To The Principles of Morals and Legislation, “in principle and in practice, in a right track and in a wrong one, the rarest of all human qualities is consistency.” With the ability to process far more data and variables in the case record than humans could ever do, an AI judge might be able to outstrip a human one in many cases.

Even so, AI judges may not solve classical questions of legal validity so much as raise new questions about the role of humans, since, if we believe that ethics and morality in the law are important, they necessarily lie, or ought to lie, in the domain of human judgment.

In practical terms, if we apply this conclusion to the perspective of American legal theorist Ronald Dworkin, for example, AI could assist with examining the entire breadth and depth of the law, but humans would ultimately choose what they consider a morally superior interpretation.

The American Judge Richard Posner believes that the immediate use of AI and automation should be restricted to assisting judges in uncovering their own biases and maintaining consistency.

At the heart of these issues is a hugely challenging question: what does it mean to be human in the age of Artificial Intelligence?

Source: Open Democracy


So long, banana-condom demos: Sex and drug education could soon come from chatbots

“Is it ok to get drunk while I’m high on ecstasy?” “How can I give oral sex without getting herpes?” Few teenagers would ask mom or dad these questions—even though their life could quite literally depend on it.

Talking to a chatbot is a different story. They never raise an eyebrow. They will never spill the beans to your parents. They have no opinion on your sex life or drug use. But that doesn’t mean they can’t take care of you.

Bots can be more than automated middlemen in business transactions: they can meet needs for emotional support when there aren’t enough willing or able humans to go around.

In fact, there are times when the emotional support of a bot may even be preferable to that of a human.

In 2016, AI tech startup X2AI built a psychotherapy bot capable of adjusting its responses based on the emotional state of its patients. The bot, Karim, is designed to help grief- and PTSD-stricken Syrian refugees, for whom the demand (and price) of therapy vastly overwhelms the supply of qualified therapists.

In X2AI’s test runs of the bot with Syrians, the team noticed that technologies like Karim offer something humans cannot:

For those in need of counseling but concerned with the social stigma of seeking help, a bot can be comfortingly objective and non-judgmental.

Bzz is a Dutch chatbot created precisely to answer questions about drugs and sex. When surveyed teens were asked to compare Bzz to finding answers online or calling a hotline, Bzz won. Teens could get their answers faster with Bzz than searching on their own, and they saw their conversations with the bot as more confidential because no human was involved and no tell-tale evidence was left in a search history.

Because chatbots can efficiently gain trust and convince people to confide personal and illicit information, the ethical obligations of such bots are critical, but still ambiguous.

Source: Quartz

 


JPMorgan software does in seconds what took lawyers 360,000 hours

At JPMorgan, a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.

The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of lawyers’ time annually. The software reviews documents in seconds, is less error-prone and never asks for vacation.

COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specialising in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.

The push to automate mundane tasks and create new tools for bankers and clients is a growing part of the firm’s $9.6 billion technology budget.

Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety:

though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.

Source: Independent


Google exec: With robots in our brains, we’ll be godlike

Futurist and Google exec Ray Kurzweil thinks that once we have robotic implants, we’ll be funnier, sexier and more loving. Because that’s what artificial intelligence can do for you.

“We’re going to add additional levels of abstraction,” he said, “and create more-profound means of expression.”

More profound than Twitter? Is that possible?

Kurzweil continued: “We’re going to be more musical. We’re going to be funnier. We’re going to be better at expressing loving sentiment.”

Because robots are renowned for their musicality, their sense of humor and their essential loving qualities. Especially in Hollywood movies.

Kurzweil insists, though, that this is the next natural phase of our existence.

“Evolution creates structures and patterns that over time are more complicated, more knowledgeable, more intelligent, more creative, more capable of expressing higher sentiments like being loving,” he said. “So it’s moving in the direction that God has been described as having — these qualities without limit.”

Yes, we are becoming gods.

“Evolution is a spiritual process and makes us more godlike,” was Kurzweil’s conclusion.

Source: CNET by Chris Matyszczyk


Dr. Richard Terrile on “introducing morality into these machines”

AI Quotes

Dr. Richard Terrile, Dir. of Center for Evolutionary Computation & Automated Design at NASA’s Jet Propulsion Lab

“I kind of laugh when people say we need to introduce morality into these machines. Whose morality? The morality of today? The morality of tomorrow? The morality of the 15th century? We change our morality like we change our clothing.”

Source: Huffington Post

Dr. Richard Terrile is an astronomer and the director of the Center for Evolutionary Computation and Automated Design at NASA’s Jet Propulsion Laboratory.  He uses techniques based on biological evolution and development to advance the fields of robotics and computer intelligence. 


Irrational thinking mimics much of what we observe in quantum physics

Quantum Physics Explains Why You Suck at Making Decisions (but what about AI?)

We normally think of physics and psychology as inhabiting two very distinct places in science, but when you realize they exist in the same universe, you start to see connections and find out how they can learn from one another. Case in point: a pair of new studies by researchers at Ohio State University argues that quantum physics can explain human irrationality and paradoxical thinking, and that this way of thinking can actually be of great benefit.

Conventional problem-solving and decision-making models often lean on classical probability theory, which outlines how humans make their best choices by evaluating the probability of good outcomes.
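The classical-probability view can be sketched in a few lines (the options and payoff numbers here are hypothetical, not from the studies): a "rational" decision-maker scores each option by its expected payoff and picks the highest.

```python
# A minimal sketch (hypothetical numbers) of the classical-probability view of
# decision-making: score each option by expected payoff, pick the maximum.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs for one option."""
    return sum(p * payoff for p, payoff in outcomes)

options = {
    # A safe bet: a guaranteed modest payoff.
    "safe":   [(1.0, 50)],
    # A gamble: 40% chance of a big win, 60% chance of nothing.
    "gamble": [(0.4, 100), (0.6, 0)],
}

best = max(options, key=lambda name: expected_value(options[name]))
print(best)  # "safe" -- expected values: safe = 50.0, gamble = 40.0
```

Real choices that deviate from this maximum are what the classical framework labels “irrational” and what, on Wang’s account, quantum models accommodate more naturally.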

But according to Zheng Joyce Wang, a communications researcher who led both studies, choices that don’t line up with classical probability theory are often labeled “irrational.” Yet, “they’re consistent with quantum theory — and with how people really behave,” she says.

The two new papers suggest that seemingly-irrational thinking mimics much of what we observe in quantum physics, which we normally think of as extremely chaotic and almost hopelessly random.

Quantum-like behavior and choices don’t follow standard, logical processes and outcomes. But like quantum physics, quantum-like behavior and thinking, Wang argues, can help us to understand complex questions better.

Wang argues that before we make a choice, our options all exist in superposition. Each possibility adds a whole new layer of dimensions, making the decision process even more complicated. Under conventional approaches to psychology the process makes no sense, but under a quantum approach, Wang argues, the decision-making process suddenly becomes clear.

Source: Inverse.com

PL – As noted in other posts on this site, AI is rational, based on logic and following rules. And that has its own complications (see the Google cars post).

If humans, as these papers suggest, operate in a different space, mimicking much of quantum physics, the question we should be asking ourselves is: What would it take for average humans and machines to COLLABORATE in solution-finding? Particularly, about human behavior and growth — the “tough human stuff,” as we, the writers of this blog, have labeled it. 

Let’s not make this about one or the other. How can humans and machines benefit each other? Is there a way to bridge the divide? We propose there is. 


First feature film ever told from the point of view of artificial intelligence

Stephen Hawking, Elon Musk and Bill Gates will love this one! (Not)

“We made NIGHTMARE CODE to open up a highly relevant conversation, asking how our mastery of computer code is changing our basic human codes of behavior. Do we still control our tools, or are we—willingly—allowing our tools to take control of us?”

The movie synopsis: “Brett Desmond, a genius programmer with a troubled past, is called in to finish a top secret behavior recognition program, ROPER, after the previous lead programmer went insane. But the deeper Brett delves into the code, the more his own behavior begins changing … in increasingly terrifying ways.”

“NIGHTMARE CODE came out of something I learned working in video-game development,” Netter says. “Prior to that experience, I thought that any two programmers of comparable skill would write the same program with code that would be 95 percent similar. I learned instead that different programmers come up with vastly different coding solutions, meaning that somewhere deep inside every computer, every mobile phone, is the individual personality of a programmer—expressed as logic.

“But what if this personality, this logic, was sentient? And what if it was extremely pissed off?”

Available on Google Play

Fangoria


AI could solve all the world’s problems – seriously?


NASA scientist Richard Terrile, who was coincidentally a technical adviser on “Terminator 3: Rise of the Machines” and spends his days trying to develop artificial intelligence, thinks that AI could eventually fix everything from ending world hunger to curing cancer.

“The benefits of AI are that it could solve all the world’s problems. All of them. Seriously. Technology could probably solve all of them in one form or another.”

“I believe it can,” he says. “These very, very advanced information systems, which go way beyond the capabilities of a human, I think are the way to go in actually solving these [problems].”

Source: Huffington Post
NASA Scientist: Artificial Intelligence ‘Could Solve All The World’s Problems’ (If It Doesn’t Terminate Us)


 
