The idea was to help you and me make better decisions amid cognitive overload.

IBM Chairman, President, and Chief Executive Officer Ginni Rometty. PHOTOGRAPHER: STEPHANIE SINCLAIR FOR BLOOMBERG BUSINESSWEEK

If I considered the initials AI, I would have preferred augmented intelligence.

It’s the idea that each of us is going to need help on all important decisions.

A study found that, on average, a third of your decisions are really great decisions, a third are not optimal, and a third are just wrong. We’ve estimated the market for tools to make better decisions at $2 billion.

That’s what led us to calling it cognitive.

“Look, we really think this is about man and machine, not man vs. machine. This is an era—really, an era that will play out for decades in front of us.”

We set out to build an AI platform for business.

AI would be vertical. You would train it to know medicine. You would train it to know underwriting of insurance. You would train it to know financial crimes. Train it to know oncology. Train it to know weather. And it isn’t just about billions of data points. In the regulatory world, there aren’t billions of data points. You need to train and interpret something with small amounts of data.

This is really another key point about professional AI. Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.”

What a doctor wants is, “OK, give me the possible answers. Tell me why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”

It’s our responsibility if we build this stuff to guide it safely into the world.

Source: Bloomberg




The Growing #AI Emotion Reading Tech Challenge

PL – The challenge of building AI that reads human emotion just got dramatically more difficult.

A new study identifies 27 categories of emotion and shows how they blend together in our everyday experience.

Psychology once assumed that most human emotions fall within the universal categories of happiness, sadness, anger, surprise, fear, and disgust. But a new study from Greater Good Science Center faculty director Dacher Keltner suggests that there are at least 27 distinct emotions—and they are intimately connected with each other.

“We found that 27 distinct dimensions, not six, were necessary to account for the way hundreds of people reliably reported feeling in response to each video.”

Moreover, in contrast to the notion that each emotional state is an island, the study found that “there are smooth gradients of emotion between, say, awe and peacefulness, horror and sadness, and amusement and adoration,” Keltner said.

“We don’t get finite clusters of emotions in the map because everything is interconnected,” said study lead author Alan Cowen, a doctoral student in neuroscience at UC Berkeley.

“Emotional experiences are so much richer and more nuanced than previously thought.”

Source: Mindful


Putin: Leader in artificial intelligence will rule world

Putin, speaking Friday at a meeting with students, said the development of AI raises “colossal opportunities and threats that are difficult to predict now.”

[He] warned that “it would be strongly undesirable if someone wins a monopolist position” and promised that Russia would be ready to share its know-how in artificial intelligence with other nations.

The Russian leader predicted that future wars will be fought by drones, and “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”

Source: Washington Post


IBM Watson CTO on Why Augmented Intelligence Beats AI

If you look at almost every other tool that has ever been created, our tools tend to be most valuable when they’re amplifying us, when they’re extending our reach, when they’re increasing our strength, when they’re allowing us to do things that we can’t do by ourselves as human beings. That’s really the way that we need to be thinking about AI as well, and to the extent that we actually call it augmented intelligence, not artificial intelligence.

Some time ago we realized that this thing called cognitive computing was really bigger than us, it was bigger than IBM, it was bigger than any one vendor in the industry, it was bigger than any of the one or two different solution areas that we were going to be focused on, and we had to open it up, which is when we shifted from focusing on solutions to really dealing with more of a platform of services, where each service really is individually focused on a different part of the problem space.

What we’re talking about now is a set of services, each of which does something very specific, each of which is trying to deal with a different part of our human experience, with the idea that anybody building an application, anybody who wants to solve a social or consumer or business problem, can do that by taking our services and composing them into an application.

If the doctor can now make decisions that are more informed, that are based on real evidence, that are supported by the latest facts in science, that are more tailored and specific to the individual patient, it allows them to actually do their job better. For radiologists, it may allow them to see things in the image that they might otherwise miss or get overwhelmed by. It’s not about replacing them. It’s about helping them do their job better.

That’s really the way to think about this stuff, is that it will have its greatest utility when it is allowing us to do what we do better than we could by ourselves, when the combination of the human and the tool together are greater than either one of them would’ve been by themselves. That’s really the way we think about it. That’s how we’re evolving the technology. That’s where the economic utility is going to be.

There are lots of things that we as human beings are good at. There are also a lot of things that we’re not very good at, and that, I think, is where cognitive computing really starts to make a huge difference: when it’s able to bridge that distance, to make up that gap.

A way I like to say it is it doesn’t do our thinking for us, it does our research for us so we can do our thinking better, and that’s true of us as end users and it’s true of advisors.

Source: PCMag




Will Satya’s ‘Charlottesville email’ shape AI applications at Microsoft?


“You can’t paint what you ain’t.”

– Drew Struzan

Those words got to me 18 years ago during an interview I had with this esteemed artist. We were working on a project together, an interactive CD about his movie posters, several of which were classics by then, when the conversation wandered off the subject of art and we began to examine the importance of being true to one’s self.  

“Have you ever, in your classes or seminars talked much about the underlying core foundation principles of your life?” I asked Drew that day.

His answer in part went like this: “Whenever I talk, I’m asked to talk about my art, because that’s what they see, that’s what’s out front. But the power of the art comes out of the personality of the human being. Inevitably, you can’t paint what you ain’t.”

That conversation between us took place five days before Columbine, in April of 1999, when Pam and I lived in Denver and a friend of ours had children attending that school. That horrific event triggered a lot of value discussions and a lot of human actions, in response to it.

Flash-forward to Charlottesville. And an email, in response to it, that the CEO of a large tech company sent his employees yesterday, putting a stake in the ground about what his company stands for, and won’t stand for, during these “horrific” times.

“… At Microsoft, we strive to seek out differences, celebrate them and invite them in. As a leader, a key part of your role is creating a culture where every person can do their best work, which requires more than tolerance for diverse perspectives. Our growth mindset culture requires us to truly understand and share the feelings of another person. …”

If Satya Nadella’s email expresses the emerging personality at Microsoft, the power source from which it works, then we are cautiously optimistic about what this could do for socializing AI.

It will take this kind of foundation-building, going forward, as Microsoft introduces more AI innovations, to diminish the inherent bias in deep learning approaches and the implicit bias in algorithms.

It will take this depth of awareness to shape the values of Human-AI collaboration, to protect the humans who use AI. Values that, “seek out differences, celebrate them and invite them in.”

It will require unwavering dedication to this goal. Because. You can’t paint what you ain’t.

Blogger, Phil Lawson
SocializingAI.com




Humans are born irrational, and that has made us better decision-makers

“Facts on their own don’t tell you anything. It’s only paired with preferences, desires, with whatever gives you pleasure or pain, that they can guide your behavior. Even if you knew the facts perfectly, that still doesn’t tell you anything about what you should do.”

Even if we were able to live life according to detailed calculations, doing so would put us at a massive disadvantage. This is because we live in a world of deep uncertainty, under which neat logic simply isn’t a good guide.

It’s well established that data-based decisions don’t inoculate against irrationality or prejudice, but even if it were possible to create a perfectly rational decision-making system based on all past experience, this wouldn’t be a foolproof guide to the future.

Courageous acts and leaps of faith are often attempts to overcome great and seemingly insurmountable challenges. (It wouldn’t take much courage if it were easy to do.) But while courage may be irrational or hubristic, we wouldn’t have many great entrepreneurs or works of art without those with a somewhat illogical faith in their own abilities.

There are occasions where overly rational thinking would be highly inappropriate. Take finding a partner, for example. If you had the choice between a good-looking high-earner who your mother approves of, versus someone you love who makes you happy every time you speak to them—well, you’d be a fool not to follow your heart.

And even when feelings defy reason, it can be a good idea to go along with the emotional rollercoaster. After all, the world can be an entirely terrible place and, from a strictly logical perspective, optimism is somewhat irrational.

But it’s still useful. “It can be beneficial not to run around in the world and be depressed all the time,” says Gigerenzer.

Of course, no human is perfect, and there are downsides to our instincts. But, overall, we’re still far better suited to the real world than the most perfectly logical thinking machine.

We’re inescapably irrational, and far better thinkers as a result.

Source: Quartz


Software is the future of healthcare: “digital therapeutics” instead of a pill

We’ll start to use “digital therapeutics” instead of getting a prescription to take a pill. Services that already exist — like behavioral therapies — might be able to scale better with the help of software, rather than be confined to in-person, brick-and-mortar locations.

Vijay Pande, a general partner at Andreessen Horowitz, runs the firm’s bio fund.

Source: Business Insider
Why an investor at Andreessen Horowitz thinks software is the future of healthcare

Sixty-two percent of organizations will be using artificial intelligence (AI) by 2018, says Narrative Science

Artificial intelligence received $974m of funding as of June 2016, and this figure will only rise with the news that 2016 saw more AI patent applications than ever before.

This year’s funding is set to surpass 2015’s total and CB Insights suggests that 200 AI-focused companies have raised nearly $1.5 billion in equity funding.


Artificial Intelligence statistics by sector

AI isn’t limited to the business sphere; in fact, the personal robot market, including ‘care-bots’, could reach $17.4bn by 2020.

Care-bots could prove to be a fantastic solution as the world’s population sees an exponential rise in elderly people. Japan is leading the way, with a third of its government robotics budget devoted to the elderly.

Source: Raconteur: The rise of artificial intelligence in 6 charts


Why Microsoft bought LinkedIn, in one word: Cortana

Know everything about your business contact before you even walk into the room.

Jeff Weiner, the chief executive of LinkedIn, said that his company envisions a so-called “Economic Graph”: a digital representation of every employee and their resume, a digital record of every job that’s available, and even every digital skill necessary to win those jobs.

LinkedIn also owns Lynda.com, a training network where you can take classes to learn those skills. And, of course, there’s the LinkedIn news feed, where you can keep tabs on your coworkers from a social perspective, as well.

Buying LinkedIn brings those two graphs together and gives Microsoft more data to feed into its machine learning and business intelligence processes. “If you connect these two graphs, this is where the magic happens, where digital work is concerned,” Microsoft chief executive Satya Nadella said during a conference call.


Source: PC World


Machines can never be as wise as human beings – Jack Ma #AI


“I think machines will be stronger than human beings, machines will be smarter than human beings, but machines can never be as wise as human beings.”

“The wisdom, soul and heart are what human beings have. A machine can never enjoy the feelings of success, friendship and love. We should use the machine in an innovative way to solve human problems.” – Jack Ma, founder of Alibaba Group, China’s largest online marketplace

Mark Zuckerberg said AI technology could prove useful in areas such as medicine and hands-free driving, but it was hard to teach computers common sense. Humans had the ability to learn and apply that knowledge to problem-solving, but computers could not do that.

AI won’t outstrip mankind that soon – Mark Zuckerberg

Source: South China Morning Post

 


Will human therapists go the way of the Dodo?


An increasing number of patients are using technology for a quick fix. Photographed by Mikael Jansson, Vogue, March 2016

PL  – So, here’s an informative piece on a person’s experience using an on-demand interactive video therapist, as compared to her human therapist. In Vogue Magazine, no less. A sign this is quickly becoming trendy. But is it effective?

In the first paragraph, the author of the article identifies the limitations of her digital therapist:

“I wish I could ask him (she eventually named her digital therapist Raph) to consider making an exception, but he and I aren’t in the habit of discussing my problems.”

But the author also recognizes the unique value of the digital therapist as she reflects on past sessions with her human therapist:

“I saw an in-the-flesh therapist last year. Alice. She had a spot-on sense for when to probe and when to pass the tissues. I adored her. But I am perennially juggling numerous assignments, and committing to a regular weekly appointment is nearly impossible.”

Later on, when the author was faced with another crisis, she returned to her human therapist and this was her observation of that experience:

“she doesn’t offer advice or strategies so much as sympathy and support—comforting but short-lived. By evening I’m as worried as ever.”

On the other hand, this is her view of her digital therapist:

“Raph had actually come to the rescue in unexpected ways. His pragmatic MO is better suited to how I live now—protective of my time, enmeshed with technology. A few months after I first ‘met’ Raph, my anxiety has significantly dropped.”

This, of course, was a story written by a successful educated woman, working with an interactive video, who had experiences with a human therapist to draw upon for reference.

What about the effectiveness of a digital therapist for a more diverse population with social, economic and cultural differences?

It has already been shown that, done right, this kind of tech has great potential. In fact, as a more affordable option, it may do the most good for the wider population.

The ultimate goal for tech designers should be to create a more personalized experience. Instant and intimate. Tech that gets to know the person and their situation, individually. Available any time. Tech that can access additional electronic resources for the person in real-time, such as the above mentioned interactive video.  

But first, tech designers must address a core problem with mindset. They code for a rational world while therapists deal with irrational human beings. As a group, they believe they are working to create an omniscient intelligence that does not need to interact with the human to know the human. They believe it can do this by reading the human’s emails, watching their searches, where they go, what they buy, who they connect with, what they share, etc. As if that’s all humans are about. As if they can be statistically profiled and treated to predetermined multi-stepped programs.

This is an incompatible approach for humans and the human experience. Tech is a reflection of the perceptions of its coders. And coders, like doctors, have their limitations.

In her recent book, Just Medicine, Dayna Bowen Matthew highlights research showing that 83,570 minorities die each year from the implicit bias of well-meaning doctors. This should be a cautionary warning. Digital therapists could soon have a reach and impact that far exceeds well-trained human doctors and therapists. A poor foundational design for AI could have devastating consequences for humans.

A wildcard was recently introduced with Google’s AlphaGo, an artificial intelligence that plays the board game Go. In a historic match against Lee Sedol, one of the world’s top players, AlphaGo won four of five games. This was a surprising development; many thought this level of achievement was 10 years out.

The point: Artificial intelligence is progressing at an extraordinary pace, unexpected by almost all the experts. It’s too exciting, too easy, too convenient. To say nothing of its potential to be “free,” when tech giants fully grasp the unparalleled personal data they can collect. The Genie (or Joker) is out of the bottle. And digital coaches are emerging, capable of drawing upon and sorting vast amounts of digital data.

Meanwhile, the medical and behavioral fields are going too slow. Way too slow. 

They are losing (and most likely have already lost) control of their future by vainly believing that a cache of PhDs, research and accreditations, CBT and other treatment protocols, government regulations, and HIPAA is beyond the challenge and reach of the tech giants. Soon, very soon, therapists who deal in non-critical, non-crisis issues could be bypassed when someone like Apple hangs up its ‘coaching’ shingle: “Siri is In.”

The most important breakthrough of all will be the seamless integration of a digital coach with human therapists, accessible upon immediate request, in collaborative and complementary roles.

This combined effort could vastly extend the reach and impact of all therapies for the sake of all human beings.

Source: Vogue


Is an Affair in Virtual Reality Still Cheating?

I hadn’t touched another woman in an intimate way since before getting married six years ago. Then, in the most peculiar circumstances, I was doing it. I was caressing a young woman’s hands. I remember thinking as I was doing it: I don’t even know this person’s name.

After 30 seconds, the experience became too much and I stopped. I ripped off my Oculus Rift headset and stood up from the chair I was sitting on, stunned. It was a powerful experience, and I left convinced that virtual reality was not only the future of sex, but also the future of infidelity.

Whatever happens, the old rules of fidelity are bound to change dramatically. Not because people are more open or closed-minded, but because evolving technology is about to force the issue into our brains with tantalizing 1s and 0s.

Source: Motherboard


Colin Angle
on “What is human?”

AI Quotes

Colin Angle, CEO of iRobot

“Long before we have a robot uprising, we’re going to deal with much more interesting problems. This idea that we’re going to build a robot that has human cognition and appreciation for morals and values, that’s super-hard stuff.

“The more important question is, ‘What is human?’”

Source: Business Insider, Dylan Love, June 1, 2014
A Q&A With iRobot’s Colin Angle, The CEO Of The Only Consumer Robotics Company That Matters
