The biggest misconception about artificial intelligence is that we have it

Artificial intelligence still has a long way to go, says IBM’s head of AI, Alexander Gray

“The biggest misconception is that we have it. I wouldn’t even call it AI. I would say it’s right to call the field AI, we’re pursuing AI, but we don’t have it yet,” he said.

Gray says at present, humans are still sorely needed.

“No matter how you look at it, there’s a lot of handcrafting [involved]. We have ever more powerful tools, but we haven’t made the leap yet,” he said.

According to Gray, we’re only seeing “human-level performance” for narrowly defined tasks. Most machine learning-based algorithms have to analyze thousands of examples and haven’t achieved one-shot or few-shot learning.
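To make the contrast concrete, here is a minimal sketch (in Python, and not IBM’s code) of the few-shot idea Gray is referring to: instead of learning from thousands of labelled examples, each class is summarised by a prototype built from just a handful of examples, and a new input is assigned to the nearest prototype. The data and class names below are invented purely for illustration.

```python
# Minimal few-shot sketch: a nearest-prototype classifier.
# Each class is represented by the mean of a handful of labelled
# embeddings, rather than a model fit on thousands of examples.
import numpy as np

def build_prototypes(support_embeddings, support_labels):
    """Average the few support examples of each class into one prototype."""
    prototypes = {}
    for label in set(support_labels):
        examples = [e for e, l in zip(support_embeddings, support_labels) if l == label]
        prototypes[label] = np.mean(examples, axis=0)
    return prototypes

def classify(query_embedding, prototypes):
    """Return the label of the closest prototype (Euclidean distance)."""
    return min(prototypes, key=lambda label: np.linalg.norm(query_embedding - prototypes[label]))

# Toy usage: three examples per class instead of thousands.
rng = np.random.default_rng(0)
cats = rng.normal(loc=0.0, size=(3, 8))
dogs = rng.normal(loc=3.0, size=(3, 8))
protos = build_prototypes(list(cats) + list(dogs), ["cat"] * 3 + ["dog"] * 3)
print(classify(rng.normal(loc=3.0, size=8), protos))  # most likely "dog"
```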

“Once you go slightly beyond that data set and it looks different, humans win. There will always be things that humans can do that AI can’t do. You still need human data scientists to do the data preparation part — lots of blood-and-guts stuff that requires open-domain knowledge about the world,” he said.

Artificial intelligence is perhaps the most hyped yet misunderstood field of study today.

Gray said while we may not be experiencing the full effects of AI yet, it’s going to happen a lot faster than we think — and that’s where the fear comes in.

“Everything moves on an exponential curve. I really do believe that we will start to see entire classes of jobs getting impacted.

“My fear is that we won’t have the social structures and agreements on what we should do to keep pace with that. I’m not sure if that makes me optimistic or pessimistic.”

Source: Yahoo




The idea was to help you and me make better decisions amid cognitive overload

IBM Chairman, President, and Chief Executive Officer Ginni Rometty. Photographer: Stephanie Sinclair for Bloomberg Businessweek

If I considered the initials AI, I would have preferred augmented intelligence.

It’s the idea that each of us is going to need help on all important decisions.

A study found that, on average, a third of your decisions are really great decisions, a third are not optimal, and a third are just wrong. We’ve estimated the market for tools to make better decisions at $2 billion.

That’s what led us all to really calling it cognitive.

“Look, we really think this is about man and machine, not man vs. machine. This is an era—really, an era that will play out for decades in front of us.”

We set out to build an AI platform for business.

AI would be vertical. You would train it to know medicine. You would train it to know underwriting of insurance. You would train it to know financial crimes. Train it to know oncology. Train it to know weather. And it isn’t just about billions of data points. In the regulatory world, there aren’t billions of data points. You need to train and interpret something with small amounts of data.

This is really another key point about professional AI. Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.”

What a doctor wants is, “OK, give me the possible answers. Tell me why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”
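As a sketch of that interaction pattern (the data structures and field names below are hypothetical, not Watson’s API), the system returns a ranked list of hypotheses, each carrying a confidence score and the evidence behind it, along with follow-up questions, rather than a single flat answer.

```python
# Illustrative sketch only: ranked hypotheses with confidence and evidence,
# instead of a single black-and-white answer.
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    answer: str
    confidence: float                                   # the "percent confident"
    evidence: list[str] = field(default_factory=list)   # sources the doctor can inspect

def present(hypotheses, follow_up_questions):
    """Show every plausible answer, why it is believed, and what else would help."""
    for h in sorted(hypotheses, key=lambda h: h.confidence, reverse=True):
        print(f"{h.answer}: {h.confidence:.0%} confident")
        for source in h.evidence:
            print(f"  evidence: {source}")
    print("What more would you like to know?", follow_up_questions)

# Hypothetical example values, for illustration only.
present(
    [Hypothesis("Diagnosis A", 0.72, ["Clinical study, 2016"]),
     Hypothesis("Diagnosis B", 0.21, ["Case report, 2014"])],
    ["Family history?", "Most recent lab results?"],
)
```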

It’s our responsibility if we build this stuff to guide it safely into the world.

Source: Bloomberg




IBM Watson CTO on Why Augmented Intelligence Beats AI

If you look at almost every other tool that has ever been created, our tools tend to be most valuable when they’re amplifying us, when they’re extending our reach, when they’re increasing our strength, when they’re allowing us to do things that we can’t do by ourselves as human beings. That’s really the way that we need to be thinking about AI as well, and to the extent that we actually call it augmented intelligence, not artificial intelligence.

Some time ago we realized that this thing called cognitive computing was really bigger than us: bigger than IBM, bigger than any one vendor in the industry, bigger than any of the one or two solution areas we were going to be focused on. We had to open it up, which is when we shifted from focusing on solutions to dealing with more of a platform of services, where each service is individually focused on a different part of the problem space.

What we’re talking about now is a set of services, each of which does something very specific, each trying to deal with a different part of our human experience, with the idea that anybody building an application, anybody who wants to solve a social, consumer, or business problem, can do that by taking our services and composing them into an application.
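A hedged sketch of that composition idea follows; the service names and endpoint are placeholders invented for illustration, not any real Watson API. The point is only the pattern: each service does one narrow job, and the application is the code that chains them together.

```python
# Sketch of composing narrow services into an application.
# The base URL and service names are hypothetical placeholders.
import requests

BASE = "https://example.com/api"  # placeholder endpoint, not a real service

def call_service(name, payload):
    """Call one narrowly scoped service and return its JSON result."""
    response = requests.post(f"{BASE}/{name}", json=payload, timeout=10)
    response.raise_for_status()
    return response.json()

def handle_customer_message(text):
    """An application composed from several single-purpose services."""
    language = call_service("language-detection", {"text": text})
    sentiment = call_service("sentiment", {"text": text, "language": language["code"]})
    intent = call_service("intent-classification", {"text": text})
    return {"language": language, "sentiment": sentiment, "intent": intent}
```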

If the doctor can now make decisions that are more informed, that are based on real evidence, that are supported by the latest facts in science, that are more tailored and specific to the individual patient, it allows them to actually do their job better. For radiologists, it may allow them to see things in the image that they might otherwise miss or get overwhelmed by. It’s not about replacing them. It’s about helping them do their job better.

That’s really the way to think about this stuff: it will have its greatest utility when it is allowing us to do what we do better than we could by ourselves, when the combination of the human and the tool together is greater than either one would’ve been by themselves. That’s really the way we think about it. That’s how we’re evolving the technology. That’s where the economic utility is going to be.

There are lots of things that we as human beings are good at. There are also a lot of things that we’re not very good at, and that’s, I think, where cognitive computing really starts to make a huge difference: when it’s able to bridge that distance, to make up that gap.

A way I like to say it is: it doesn’t do our thinking for us, it does our research for us so we can do our thinking better. That’s true of us as end users, and it’s true of advisors.

Source: PCMag


