The idea was to help you and me make better decisions amid cognitive overload

IBM Chairman, President, and Chief Executive Officer Ginni Rometty. PHOTOGRAPHER: STEPHANIE SINCLAIR FOR BLOOMBERG BUSINESSWEEK

If I considered the initials AI, I would have preferred augmented intelligence.

It’s the idea that each of us is going to need help with all important decisions.

A study said that, on average, a third of your decisions are really great decisions, a third are not optimal, and a third are just wrong. We’ve estimated the market is $2 billion for tools to make better decisions.

That’s what led us all to calling it cognitive computing.

“Look, we really think this is about man and machine, not man vs. machine. This is an era—really, an era that will play out for decades in front of us.”

We set out to build an AI platform for business.

AI would be vertical. You would train it to know medicine. You would train it to know underwriting of insurance. You would train it to know financial crimes. Train it to know oncology. Train it to know weather. And it isn’t just about billions of data points. In the regulatory world, there aren’t billions of data points. You need to train and interpret something with small amounts of data.

This is really another key point about professional AI. Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.”

What a doctor wants is, “OK, give me the possible answers. Tell me why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”
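To make that concrete, here is a minimal sketch, in Python, of what such an “augmented” response might look like: a ranked list of hypotheses, each with a confidence, a rationale, and supporting evidence, plus follow-up questions rather than a single verdict. The field names and values are invented for illustration; they do not describe any real Watson data structure.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hypothesis:
    """One candidate answer; a possibility to weigh, not a verdict."""
    answer: str
    confidence: float                 # e.g. 0.72 meaning "72% confident"
    rationale: str                    # why the system believes it
    evidence: List[str] = field(default_factory=list)   # studies, guidelines

@dataclass
class AssistantResponse:
    hypotheses: List[Hypothesis]      # ranked alternatives, not one answer
    follow_up_questions: List[str]    # "What more would you like to know?"

# Illustrative only; every value here is made up.
response = AssistantResponse(
    hypotheses=[
        Hypothesis("Condition A", 0.72, "Pattern matches prior cases",
                   ["Study X (2015)", "Guideline Y"]),
        Hypothesis("Condition B", 0.21, "Partial symptom overlap",
                   ["Case series Z"]),
    ],
    follow_up_questions=["Is there a family history?", "Any recent lab work?"],
)
print(response.hypotheses[0].confidence)
```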

It’s our responsibility if we build this stuff to guide it safely into the world.

Source: Bloomberg




IBM Watson CTO on Why Augmented Intelligence Beats AI

If you look at almost every other tool that has ever been created, our tools tend to be most valuable when they’re amplifying us, when they’re extending our reach, when they’re increasing our strength, when they’re allowing us to do things that we can’t do by ourselves as human beings. That’s really the way that we need to be thinking about AI as well, and to the extent that we actually call it augmented intelligence, not artificial intelligence.

Some time ago we realized that this thing called cognitive computing was really bigger than us. It was bigger than IBM, bigger than any one vendor in the industry, and bigger than any one or two of the solution areas we were going to focus on, so we had to open it up. That’s when we shifted from focusing on solutions to dealing with more of a platform of services, where each service is individually focused on a different part of the problem space.

What we’re talking about now is a set of services, each of which does something very specific and tries to deal with a different part of our human experience. The idea is that anybody building an application, anybody who wants to solve a social, consumer, or business problem, can do that by taking our services and composing them into an application.
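As a rough illustration of that composition idea, the sketch below wires a few hypothetical, stubbed-out services into one small “application.” The function names and return values are placeholders invented for this example, not actual Watson (or any vendor’s) API calls.

```python
# Hypothetical, stubbed-out "services" and a tiny application composed from them.

def transcribe_speech(audio: bytes) -> str:
    """Speech-to-text service (stub)."""
    return "the shipment arrived two days late and the customer is upset"

def detect_tone(text: str) -> dict:
    """Tone/sentiment service (stub)."""
    return {"sentiment": "negative", "confidence": 0.84}

def extract_entities(text: str) -> list:
    """Entity-extraction service (stub)."""
    return ["shipment", "customer"]

def triage_support_call(audio: bytes) -> dict:
    """The 'application' is nothing more than a composition of narrow services."""
    transcript = transcribe_speech(audio)
    return {
        "transcript": transcript,
        "tone": detect_tone(transcript),
        "entities": extract_entities(transcript),
    }

print(triage_support_call(b"...raw audio bytes..."))
```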

If the doctor can now make decisions that are more informed, that are based on real evidence, that are supported by the latest facts in science, that are more tailored and specific to the individual patient, it allows them to actually do their job better. For radiologists, it may allow them to see things in the image that they might otherwise miss or get overwhelmed by. It’s not about replacing them. It’s about helping them do their job better.

That’s really the way to think about this stuff: it will have its greatest utility when it is allowing us to do what we do better than we could by ourselves, when the combination of the human and the tool together is greater than either one of them would’ve been by themselves. That’s really the way we think about it. That’s how we’re evolving the technology. That’s where the economic utility is going to be.

There are lots of things that we as human beings are good at. There are also a lot of things that we’re not very good at, and that, I think, is where cognitive computing really starts to make a huge difference: when it’s able to bridge that distance and make up that gap.

A way I like to say it is that it doesn’t do our thinking for us, it does our research for us so we can do our thinking better. That’s true of us as end users, and it’s true of advisors.

Source: PCMag




80% of what human physicians currently do will soon be done instead by technology

Data-driven AI technologies are well suited to address chronic inefficiencies in health markets, potentially lowering costs by hundreds of billions of dollars, while simultaneously reducing the time burden on physicians.

These technologies can be leveraged to capture the massive volume of data that describes a patient’s past and present state, project potential future states, analyze that data in real time, assist in reasoning about the best way to achieve patient and physician goals, and provide both patient and physician constant real-time support. Only AI can fulfill such a mission. There is no other solution.

Technologist and investor Vinod Khosla posited that 80 percent of what human physicians currently do will soon be done instead by technology, allowing physicians to focus their time on the really important elements of patient-physician interaction.

Within five years, the healthcare sector has the potential to undergo a complete metamorphosis courtesy of breakthrough AI technologies. Here are just a few examples:

1. Physicians will practice with AI virtual assistants (using, for example, software tools similar to Apple’s Siri, but specialized to the specific healthcare application).

2. Physicians with AI virtual assistants will be able to treat 5X – 10X as many patients with chronic illnesses as they do today, with better outcomes than in the past.

Patients will have a constant “friend” providing a digital health conscience to advise, support, and even encourage them to make healthy choices and pursue a healthy lifestyle.

3. AI virtual assistants will support both patients and healthy individuals in health maintenance with ongoing and real-time intelligent advice.

Our greatest opportunity for AI-enhancement in the sector is keeping people healthy, rather than waiting to treat them when they are sick. AI virtual assistants will be able to acquire deep knowledge of diet, exercise, medications, emotional and mental state, and more.

4. Medical devices previously only available in hospitals will be available in the home, enabling much more precise and timely monitoring and leading to a healthier population.

5. Affordable new tools for diagnosis and treatment of illnesses will emerge based on data collected from extant and widely adopted digital devices such as smartphones.

6. Robotics and in-home AI systems will assist patients with independent living.

But don’t be misled: the best metaphor is that these systems are learning the way humans learn, and that they are in their infancy, just starting to crawl. Healthcare AI virtual assistants will soon be able to walk, and then run.

Many of today’s familiar AI engines, personified in Siri, Cortana, Alexa, Google Assistant or any of the hundreds of “intelligent chatbots,” are still immature and their capabilities are highly limited. Within the next few years they will be conversational, they will learn from the user, they will maintain context, and they will provide proactive assistance, just to name a few of their emerging capabilities.

And with these capabilities applied in the health sector, they will enable us to keep millions of citizens healthier, give physicians the support and time they need to practice, and save trillions of dollars in healthcare costs. Welcome to the age of AI.

Source: Venture Beat


By 2020 the average person will have more conversations with bots than with their spouse

Gartner Predicts a Virtual World of Exponential Change

Mr. Plummer (VP & Fellow at Gartner) noted that disruption has moved from an infrequent inconvenience to a consistent stream of change that is redefining markets and entire industries.

“The practical approach is to recognize disruption, prioritize the impacts of that disruption, and then react to it to capture value,” Plummer said.

Gartner’s Top 10 Predictions for 2017 and Beyond

 

#4. Algorithms at Work

By 2020, algorithms will positively alter the behavior of billions of global workers.
Employees, already familiar with behavioral influence through contextual algorithms on consumer sites such as Amazon, will be influenced by an emerging set of “persuasive technologies” that leverage big data from myriad sources, including mobile and IoT devices, and deep analysis.

JPMorgan Chase uses an algorithm to forecast and positively influence the behavior of thousands of investment bank and asset management employees to minimize mistaken or ethically wrong decisions.

Richard Branson’s Virgin Atlantic uses influence algorithms to guide pilots to use less fuel.

By year-end 2017, watch for at least one commercial organization to report a significant increase in profit margins because it used algorithms to positively alter its employees’ behaviors.

Source: Gartner


We’re so unprepared for the robot apocalypse

Industrial robots alone have eliminated up to 670,000 American jobs between 1990 and 2007

It seems that after a factory sheds workers, that economic pain reverberates, triggering further unemployment at, say, the grocery store or the neighborhood car dealership.

In a way, this is surprising. Economists understand that automation has costs, but they have largely emphasized the benefits: machines make things cheaper, and they free up workers to do other jobs.

The latest study reveals that for manufacturing workers, the process of adjusting to technological change has been much slower and more painful than most experts thought. 

Every industrial robot eliminated about three manufacturing positions, plus three more jobs from around town.

“We were looking at a span of 20 years, so in that timeframe, you would expect that manufacturing workers would be able to find other employment,” Restrepo said. Instead, not only did the factory jobs vanish, but other local jobs disappeared too.

This evidence draws attention to the losers: the dislocated factory workers who just can’t bounce back.

One robot in the workforce led to the loss of 6.2 jobs within a commuting zone, the area within which local people travel to work.

The robots also reduce wages, with one robot per thousand workers leading to a wage decline of between 0.25% and 0.5%, according to Fortune.

None of these efforts, though, seems to be doing enough for communities that have lost their manufacturing bases, where people have reduced earnings for the rest of their lives.

Perhaps that much was obvious. After all, anecdotes about the Rust Belt abound. But the new findings bolster the conclusion that these economic dislocations are not brief setbacks, but can hurt areas for an entire generation.

How do we even know that automation is a big part of the story at all? A key bit of evidence is that, despite the massive layoffs, American manufacturers are making more stuff than ever. Factories have become vastly more productive.

Some consultants believe that the number of industrial robots will quadruple in the next decade, which could mean millions more displaced manufacturing workers.

The question now is what to do if the period of “maladjustment” lasts decades, or possibly a lifetime, as the latest evidence suggests.

Automation amplified opportunities for people with advanced skills and talents.

Source: The Washington Post


Is Your Doctor Stumped? There’s a Chatbot for That

Doctors have created a chatbot that uses artificial intelligence to revolutionize communication within hospitals. Basically a cyber-radiologist in app form, it can quickly and accurately provide specialized information to non-radiologists. And, like all good A.I., it’s constantly learning.

Traditionally, interdepartmental communication in hospitals is a hassle. A clinician’s assistant or nurse practitioner with a radiology question would need to get a specialist on the phone, which can take time and risks miscommunication. But using the app, non-radiologists can plug in common technical questions and receive an accurate response instantly.

“Say a patient has a creatinine [a lab test that shows how well the kidneys are working],” co-author and application programmer Kevin Seals tells Inverse. “You send a message, like you’re texting with a human radiologist: ‘My patient is a 5.6, can they get a CT scan with contrast?’ A lot of this is pretty routine questions that are easily automated with software, but there’s no good tool for doing that now.”
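The kind of “routine question” being automated can be as simple as a threshold check. The Python sketch below is purely illustrative: the cutoff value is a made-up placeholder, not clinical guidance, and the real app presumably encodes far richer radiology protocols.

```python
# Purely illustrative: the cutoff below is a made-up placeholder, NOT clinical
# guidance; the real chatbot would encode far richer radiology rules.

CREATININE_CONTRAST_CUTOFF = 2.0   # hypothetical threshold, mg/dL

def contrast_ct_advice(creatinine_mg_dl: float) -> str:
    """Answer the routine 'can they get a CT with contrast?' style question."""
    if creatinine_mg_dl <= CREATININE_CONTRAST_CUTOFF:
        return "Contrast CT is likely acceptable; confirm with radiology."
    return ("Creatinine is elevated; a radiologist should review before "
            "giving IV contrast.")

print(contrast_ct_advice(5.6))   # the value quoted in the article
```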

In about a month, the team plans to make the chatbot available to everyone at UCLA’s Ronald Reagan Medical Center, see how that plays out, and scale up from there. Your doctor may never be stumped again.

Source: Inverse


Burger-flipping robot could spell the end of teen employment

The AI-driven robot ‘Flippy,’ by Miso Robotics, is marketed as a kitchen assistant, rather than a replacement for professionally-trained teens that ponder the meaning of life — or what their crush looks like naked — while awaiting a kitchen timer’s signal that it’s time to flip the meat.

Flippy features a number of different sensors and cameras to identify food objects on the grill. It knows, for example, that burgers and chicken-like patties cook for a different duration. Once done, the machine expertly lifts the burger off the grill and uses its on-board technology to place it gently on a perfectly-browned bun.

The robot doesn’t just work the grill like a master hibachi chef, either. Flippy is capable of deep frying, chopping vegetables, and even plating dishes.

Source: TNW


JPMorgan software does in seconds what took lawyers 360,000 hours

At JPMorgan, a learning machine is parsing financial deals that once kept legal teams busy for thousands of hours.

The program, called COIN, for Contract Intelligence, does the mind-numbing job of interpreting commercial-loan agreements that, until the project went online in June, consumed 360,000 hours of lawyers’ time annually. The software reviews documents in seconds, is less error-prone and never asks for vacation.

COIN is just the start for the biggest U.S. bank. The firm recently set up technology hubs for teams specialising in big data, robotics and cloud infrastructure to find new sources of revenue, while reducing expenses and risks.

The push to automate mundane tasks and create new tools for bankers and clients is a growing part of the firm’s $9.6 billion technology budget.

Behind the strategy, overseen by Chief Operating Officer Matt Zames and Chief Information Officer Dana Deasy, is an undercurrent of anxiety:

though JPMorgan emerged from the financial crisis as one of few big winners, its dominance is at risk unless it aggressively pursues new technologies, according to interviews with a half-dozen bank executives.

Source: Independent


Wikipedia bots act more like humans than expected

‘Benevolent bots’, or software robots designed to improve articles on Wikipedia, sometimes have online ‘fights’ over content that can continue for years, say scientists who warn that artificial intelligence systems may behave more like humans than expected.

They found that bots interacted with one another, whether or not this was by design, and it led to unpredictable consequences.

Researchers said that bots are more like humans than you might expect. Bots appear to behave differently in culturally distinct online environments.

The findings are a warning to those using artificial intelligence for building autonomous vehicles, cyber security systems or for managing social media.

We may have to devote more attention to bots’ diverse social life and their different cultures, researchers said.

The research found that although the online world has become an ecosystem of bots, our knowledge of how they interact with each other is still rather poor.

Although bots are automatons that do not have the capacity for emotions, bot-to-bot interactions are unpredictable, and the bots act in distinctive ways.

Researchers found that the German edition of Wikipedia had the fewest conflicts between bots, with each undoing another’s edits 24 times, on average, over ten years.

This shows relative efficiency, when compared with bots on the Portuguese Wikipedia edition, which undid another bot’s edits 185 times, on average, over ten years, researchers said.

Bots on English Wikipedia undid another bot’s work 105 times, on average, over ten years, three times the rate of human reverts, they said.

The findings show that even simple autonomous algorithms can produce complex interactions that result in unintended consequences – ‘sterile fights’ that may continue for years, or reach deadlock in some cases.

“We find that bots behave differently in different cultural environments and their conflicts are also very different to the ones between human editors,” said Milena Tsvetkova, from the Oxford Internet Institute.

“This has implications not only for how we design artificial agents but also for how we study them. We need more research into the sociology of bots,” said Tsvetkova.

Source: The Statesman


We are evolving to an AI first world

“We are at a seminal moment in computing … we are evolving from a mobile first to an AI first world,” says Sundar Pichai.

“Our goal is to build a personal Google for each and every user … We want to build each user his or her own individual Google.”

Watch 4 mins of Sundar Pichai’s key comments about the role of AI in our lives and how a personal Google for each of us will work. 


Google teaches robots to learn from each other


Google has a plan to speed up robotic learning, and it involves getting robots to share their experiences – via the cloud – and collectively improve their capabilities – via deep learning.

Google researchers decided to combine two recent technology advances. The first is cloud robotics, a concept that envisions robots sharing data and skills with each other through an online repository. The other is machine learning, and in particular, the application of deep neural networks to let robots learn for themselves.

They got the robots to pool their experiences to “build a common model of the skill” that, as the researchers explain, was better and faster than what they could have achieved on their own.
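A toy sketch of the pooling idea, under the simplest possible assumptions: each robot logs its own grasp trials into a shared cloud buffer, and one common “model” (here a trivial threshold fit standing in for the deep network the researchers used) is trained on everyone’s data before being pushed back to all robots. All names and numbers are invented.

```python
import random

# Toy sketch of cloud robotics + shared learning: pooled experience, one model.

shared_buffer = []   # the cloud repository of every robot's experience

def robot_practice(n_trials: int = 50) -> None:
    """One robot practices grasping and uploads (angle, success) pairs."""
    for _ in range(n_trials):
        angle = random.uniform(0, 90)
        success = angle > 30 + random.gauss(0, 5)   # hidden "true" skill
        shared_buffer.append((angle, success))

def fit_shared_model(buffer):
    """Pick the grip-angle threshold that best explains ALL robots' data."""
    best_t, best_acc = 0, 0.0
    for t in range(91):
        acc = sum((a > t) == s for a, s in buffer) / len(buffer)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

for _ in range(4):            # four robots practicing in parallel
    robot_practice()

print("shared skill threshold:", fit_shared_model(shared_buffer))
# Each robot would now download this shared model instead of learning alone.
```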

As robots begin to master the art of learning, it’s inevitable that one day they’ll be able to acquire new skills at much, much faster rates than humans have ever been able to.

Source: Global Futurist

 


Google’s AI Plans Are A Privacy Nightmare

Google is betting that people care more about convenience and ease than they do about a seemingly oblique notion of privacy, and it is increasingly correct in that assumption.

Google’s new assistant, which debuted in the company’s new messaging app Allo, works like this: Simply ask the assistant a question about the weather, nearby restaurants, or for directions, and it responds with detailed information right there in the chat interface.

Because Google’s assistant recommends things that are innately personal to you, like where to eat tonight or how to get from point A to B, it is amassing a huge collection of your most personal thoughts, visited places, and preferences. In order for the AI to “learn,” it will have to collect and analyze as much data about you as possible in order to serve you more accurate recommendations, suggestions, and data.

In order for artificial intelligence to function, your messages have to be unencrypted.

These new assistants are really cool, and the reality is that tons of people will probably use them and enjoy the experience. But at the end of the day, we’re sacrificing the security and privacy of our data so that Google can develop what will eventually become a new revenue stream. Lest we forget: Google and Facebook have a responsibility to investors, and an assistant that offers up a sponsored result when you ask it what to grab for dinner tonight could be a huge moneymaker.

Source: Gizmodo


UC Berkeley launches Center for Human-Compatible Artificial Intelligence

The primary focus of the new center is to ensure that AI systems are “beneficial to humans,” says UC Berkeley AI expert Stuart Russell.

The center will work on ways to guarantee that the most sophisticated AI systems of the future, which may be entrusted with control of critical infrastructure and may provide essential services to billions of people, will act in a manner that is aligned with human values.

“In the process of figuring out what values robots should optimize, we are making explicit the idealization of ourselves as humans. As we envision AI aligned with human values, that process might cause us to think more about how we ourselves really should behave, and we might learn that we have more in common with people of other cultures than we think.”

Source: Berkeley.edu


Why Microsoft bought LinkedIn, in one word: Cortana

Know everything about your business contact before you even walk into the room.

Jeff Weiner, the chief executive of LinkedIn, said that his company envisions a so-called “Economic Graph”: a digital representation of every employee and their resume, a digital record of every job that’s available, and even every digital skill necessary to win those jobs.

LinkedIn also owns Lynda.com, a training network where you can take classes to learn those skills. And, of course, there’s the LinkedIn news feed, where you can keep tabs on your coworkers from a social perspective, as well.

Buying LinkedIn brings those two graphs together and gives Microsoft more data to feed into its machine learning and business intelligence processes. “If you connect these two graphs, this is where the magic happens, where digital work is concerned,” Microsoft chief executive Satya Nadella said during a conference call.


Source: PC World


Obama – robots taking over jobs that pay less than $20 an hour

Buried deep in President Obama’s February economic report to Congress was a rather grave section on the future of robotics in the workforce.

After much back and forth on the ways robots have eliminated or displaced workers in the past, the report introduced a critical study conducted this year by the White House’s Council of Economic Advisers (CEA).

The study examined the chances automation could threaten people’s jobs based on how much money they make: either less than $20 an hour, between $20 and $40 an hour, or more than $40.

The results showed a 0.83 median probability of automation replacing the lowest-paid workers — those manning the deep fryers, call centers, and supermarket cash registers — while the other two wage classes had 0.31 and 0.04 chances of getting automated, respectively.

In other words, 62% of American jobs may be at risk.
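For readers wondering how band-level probabilities roll up into a single headline number, the sketch below shows the weighted-average arithmetic. The employment shares are invented placeholders, so it deliberately does not reproduce the report’s 62% figure; it only illustrates the calculation.

```python
# Weighted-average illustration only: the employment shares are made up,
# so the result will not match the report's 62% headline figure.

bands = {
    # wage band: (probability of automation, hypothetical share of employment)
    "under $20/hr": (0.83, 0.50),
    "$20-$40/hr":   (0.31, 0.35),
    "over $40/hr":  (0.04, 0.15),
}

expected_at_risk = sum(p * share for p, share in bands.values())
print(f"Expected share of jobs at risk: {expected_at_risk:.0%}")  # ~53% here
```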

Source: TechInsider

Meanwhile, from Alphabet (Google) chairman Eric Schmidt:

“There’s no question that as [AI] becomes more pervasive, people doing routine, repetitive tasks will be at risk,” Schmidt says.

“I understand the economic arguments, but this technology benefits everyone on the planet, from the rich to the poor, the educated to uneducated, high IQ to low IQ, every conceivable human being. It genuinely makes us all smarter, so this is a natural next step.”

Source: Financial Review


AI will free humans to do other things

The ability of AI to automate much of what we do, and its potential to destroy humanity, are two very different things. But according to Martin Ford, author of Rise of the Robots: Technology and the Threat of a Jobless Future, they’re often conflated. It’s fine to think about the far-future implications of AI, but only if it doesn’t distract us from the issues we’re likely to face over the next few decades. Chief among them is mass automation.

There’s no question that artificial intelligence is poised to uproot and replace many existing jobs, from factory work to the upper echelons of white-collar work. Some experts predict that half of all jobs in the US are vulnerable to automation in the near future.

But that doesn’t mean we won’t be able to deal with the disruption. A strong case can be made that offloading much of our work, both physical and mental, is a laudable, quasi-utopian goal for our species.

In all likelihood, artificial intelligence will produce new ways of creating wealth, while freeing humans to do other things. And advances in AI will be accompanied by advances in other areas, especially manufacturing. In the future, it will become easier, and not harder, to meet our basic needs.

Source: Gizmodo


Artificial Intelligence: Toward a technology-powered, human-led AI revolution

Research conducted among 9,000 young people between the ages of 16 and 25 in nine industrialised and developing markets – Australia, Brazil, China, France, Germany, Great Britain, India, South Africa and the United States – showed that a striking 40 per cent think that a machine – some kind of artificial intelligence – will be able to fully do their job in the next decade.

Young people today are keenly aware that the impact of technology will be central to the way their careers and lives will progress and differ from those of previous generations.

In its “Top strategic predictions for 2016 and beyond,” Gartner expects that by 2018, 20 per cent of all business content will be authored by machines and 50 per cent of the fastest-growing companies will have fewer employees than instances of smart machines. This is AI in action. Automated systems can have measurable, positive impacts on both our environment and our social responsibilities, giving us the room to explore, research and create new techniques to further enrich our lives. It is a radical revolution in our time.

The message from the next generation seems to be “take us on the journey.” But it is one which technology leaders need to lead. That means ensuring that as we use technology to remove the mundane, we also use it to amplify the creativity and inquisitive nature only humans are capable of. We need the journey of AI to be a human-led journey.


Google’s new AI will reply to your emails so you don’t have to

People who have the Inbox email program on their iPhones or Android devices will soon have a new option when it comes to replying to emails. Instead of coming up with their own responses on their mobile devices, they’ll get to choose between three options created by a neural network built by Google researchers.

Google claims it has built an AI that can read incoming emails, understand them, and generate a short, appropriate response that the recipient can then edit or send with just a click.


In the case of Smart Reply, what Google has done is combine several systems to build a neural network that can read your email, parse what the words in the email mean, and then not only generate a response, but generate three different responses. This is more than just building out rules for common words that fall in an email. This is truly teaching a computer to understand the text of an email. It uses the type of neural networks found in natural language processing to understand what a person means and also generate a reply.

Another bizarre feature of our early prototype was its propensity to respond with “I love you” to seemingly anything. As adorable as this sounds, it wasn’t really what we were hoping for. Some analysis revealed that the system was doing exactly what we’d trained it to do, generate likely responses—and it turns out that responses like “Thanks”, “Sounds good”, and “I love you” are super common—so the system would lean on them as a safe bet if it was unsure. Normalizing the likelihood of a candidate reply by some measure of that response’s prior probability forced the model to predict responses that were not just highly likely, but also had high affinity to the original message. This made for a less lovey, but far more useful, email assistant.
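The normalization trick described in that quote can be sketched as re-scoring each candidate reply by its conditional likelihood minus a discounted log prior. The probabilities and the weighting constant below are invented for illustration; the real Smart Reply system operates over a neural network’s output distribution rather than a hand-written table.

```python
import math

# Re-scoring sketch: rank each candidate reply not just by how likely it is
# given the message, but penalized by how likely it is in general (its prior),
# so generic replies like "Thanks" or "I love you" stop winning by default.
# All probabilities and the weight ALPHA are invented for illustration.

candidates = {
    # reply: (P(reply | message), P(reply) across all emails)
    "I love you":                   (0.10, 0.090),
    "Thanks":                       (0.12, 0.110),
    "Sounds good, see you at 3pm":  (0.08, 0.002),
}

ALPHA = 0.9   # how strongly to discount generic, high-prior replies

def score(p_given_message: float, p_prior: float) -> float:
    return math.log(p_given_message) - ALPHA * math.log(p_prior)

ranked = sorted(candidates, key=lambda r: score(*candidates[r]), reverse=True)
print(ranked)
# The specific reply now outranks the generic ones despite its lower raw
# likelihood, giving the "less lovey, but far more useful" behaviour above.
```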

Source: Fortune
