Wait … am I being manipulated on this topic by an Amazon-owned AI engine?

Image Credit: chombosan/Shutterstock

The other night, my nine-year-old daughter (who is, of course, the most tech-savvy person in the house) introduced me to a new Amazon Alexa skill.

“Alexa, start a conversation,” she said.

We were immediately drawn into an experience with a new bot or, as the technologists would say, “conversational user interface” (CUI). It was, we were told, the recent winner of an Amazon AI competition, built by a team from the University of Washington.

At first, the experience was fun, but when we chose to explore a technology topic, the bot responded, “Have you heard of net neutrality?” What we experienced thereafter was slightly discomforting.

The bot, seemingly innocuously, cited a number of articles that she “had read on the web” about the FCC, Ajit Pai, and the issue of net neutrality. But here’s the thing: all four articles she recommended had a distinct and clear anti-Ajit Pai bias.

Now, the topic of net neutrality is a heated one, and many smart people make valid points on both sides, including Fred Wilson and Ben Thompson. That is how it should be.

But the experience of the Alexa CUI should give you pause, as it did me.

To someone with limited familiarity with the topic of net neutrality, the voice seemed soothing and the information unbiased. But if you have a familiarity with the topic, you might start to wonder, “wait … am I being manipulated on this topic by an Amazon-owned AI engine to help the company achieve its own policy objectives?”

The experience highlights some of the risks of the AI-powered future into which we are hurtling at warp speed.

If you are going to trust your decision-making to a centralized AI source, you need to have 100 percent confidence in:

  • The integrity and security of the data (are the inputs accurate and reliable, and can they be manipulated or stolen?)
  • The machine learning algorithms that inform the AI (are they prone to excessive error or bias, and can they be inspected?)
  • The AI’s interface (does it reliably represent the output of the AI and effectively capture new data?)

In a centralized, closed model of AI, you are asked to implicitly trust in each layer without knowing what is going on behind the curtains.

Welcome to the world of Blockchain+AI.

3 blockchain projects tackling decentralized data and AI

Source: Venture Beat




The Growing #AI Emotion Reading Tech Challenge

PL – The challenge of building AI emotion-reading tech just got dramatically more difficult.

A new study identifies 27 categories of emotion and shows how they blend together in our everyday experience.

Psychology once assumed that most human emotions fall within the universal categories of happiness, sadness, anger, surprise, fear, and disgust. But a new study from Greater Good Science Center faculty director Dacher Keltner suggests that there are at least 27 distinct emotions—and they are intimately connected with each other.

“We found that 27 distinct dimensions, not six, were necessary to account for the way hundreds of people reliably reported feeling in response to each video.”

Moreover, in contrast to the notion that each emotional state is an island, the study found that “there are smooth gradients of emotion between, say, awe and peacefulness, horror and sadness, and amusement and adoration,” Keltner said.

“We don’t get finite clusters of emotions in the map because everything is interconnected,” said study lead author Alan Cowen, a doctoral student in neuroscience at UC Berkeley.

“Emotional experiences are so much richer and more nuanced than previously thought.”

Source: Mindful


80% of what human physicians currently do will soon be done instead by technology, allowing physicians to focus on the really important elements of the patient-physician interaction

Data-driven AI technologies are well suited to address chronic inefficiencies in health markets, potentially lowering costs by hundreds of billions of dollars, while simultaneously reducing the time burden on physicians.

These technologies can be leveraged to capture the massive volume of data that describes a patient’s past and present state, project potential future states, analyze that data in real time, assist in reasoning about the best way to achieve patient and physician goals, and provide both patient and physician constant real-time support. Only AI can fulfill such a mission. There is no other solution.

Technologist and investor Vinod Khosla posited that 80 percent of what human physicians currently do will soon be done instead by technology, allowing physicians to focus their time on the really important elements of the patient-physician interaction.

Within five years, the healthcare sector has the potential to undergo a complete metamorphosis courtesy of breakthrough AI technologies. Here are just a few examples:

1. Physicians will practice with AI virtual assistants (using, for example, software tools similar to Apple’s Siri, but specialized to the specific healthcare application).

2. Physicians with AI virtual assistants will be able to treat 5X – 10X as many patients with chronic illnesses as they do today, with better outcomes than in the past.

Patients will have a constant “friend” providing a digital health conscience to advise, support, and even encourage them to make healthy choices and pursue a healthy lifestyle.

3. AI virtual assistants will support both patients and healthy individuals in health maintenance with ongoing and real-time intelligent advice.

Our greatest opportunity for AI-enhancement in the sector is keeping people healthy, rather than waiting to treat them when they are sick. AI virtual assistants will be able to acquire deep knowledge of diet, exercise, medications, emotional and mental state, and more.

4. Medical devices previously only available in hospitals will be available in the home, enabling much more precise and timely monitoring and leading to a healthier population.

5. Affordable new tools for diagnosis and treatment of illnesses will emerge based on data collected from extant and widely adopted digital devices such as smartphones.

6. Robotics and in-home AI systems will assist patients with independent living.

But don’t be misled: the best metaphor is that these systems are learning the way humans learn, and that they are in their infancy, just starting to crawl. Healthcare AI virtual assistants will soon be able to walk, and then run.

Many of today’s familiar AI engines, personified in Siri, Cortana, Alexa, Google Assistant or any of the hundreds of “intelligent chatbots,” are still immature and their capabilities are highly limited. Within the next few years they will be conversational, they will learn from the user, they will maintain context, and they will provide proactive assistance, just to name a few of their emerging capabilities.

And with these capabilities applied in the health sector, they will enable us to keep millions of citizens healthier, give physicians the support and time they need to practice, and save trillions of dollars in healthcare costs. Welcome to the age of AI.

Source: Venture Beat


The Rock Teases Surprise Movie With Siri as co-star #AI

Johnson took to Instagram to announce what seems to be a film project with Apple entitled Dominate The Day.

“I partnered with Apple to make the biggest, coolest, sexiest, craziest, dopest, most over the top, funnest (is that even a word?) movie ever made,” Johnson wrote in an Instagram caption showing a poster for the upcoming project. “And I have the greatest co-star of all time, Siri. I make movies for the world to enjoy and we also made this one to motivate you to get out there and get the job done. I want you to watch it, have fun with it and then go live it.”


Inside Microsoft’s Artificial Intelligence Comeback

Yoshua Bengio

[Yoshua Bengio, one of the three intellects who shaped the deep learning that now dominates artificial intelligence, has never been one to take sides. But Bengio has recently chosen to sign on with Microsoft. In this WIRED article he explains why.]

“We don’t want one or two companies, which I will not name, to be the only big players in town for AI,” he says, raising his eyebrows to indicate that we both know which companies he means. One eyebrow is in Menlo Park; the other is in Mountain View. “It’s not good for the community. It’s not good for people in general.”

That’s why Bengio has recently chosen to forego his neutrality, signing on with Microsoft.

Yes, Microsoft. His bet is that the former kingdom of Windows alone has the capability to establish itself as AI’s third giant. It’s a company that has the resources, the data, the talent, and—most critically—the vision and culture to not only realize the spoils of the science, but also push the field forward.

Just as the internet disrupted every existing business model and forced a re-ordering of industry that is just now playing out, artificial intelligence will require us to imagine how computing works all over again.

In this new landscape, computing is ambient, accessible, and everywhere around us. To draw from it, we need a guide—a smart conversationalist who can, in plain written or spoken form, help us navigate this new super-powered existence. Microsoft calls it Cortana.

Because Cortana comes installed with Windows, it has 145 million monthly active users, according to the company. That’s considerably more than Amazon’s Alexa, for example, which can be heard on fewer than 10 million Echoes. But unlike Alexa, which primarily responds to voice, Cortana also responds to text and is embedded in products that many of us already have. Anyone who has plugged a query into the search box at the top of the toolbar in Windows has used Cortana.

Eric Horvitz wants Microsoft to be more than simply a place where research is done. He wants Microsoft Research to be known as a place where you can study the societal and social influences of the technology.

This will be increasingly important as Cortana strives to become, to the next computing paradigm, what your smartphone is today: the front door for all of your computing needs. Microsoft thinks of it as an agent that has all your personal information and can interact on your behalf with other agents.

If Cortana is the guide, then chatbots are Microsoft’s fixers. They are tiny snippets of AI-infused software that are designed to automate one-off tasks you used to do yourself, like making a dinner reservation or completing a banking transaction.

Emma Williams, Marcus Ash, and Lili Cheng

So far, North American teens appear to like chatbot friends every bit as much as Chinese teens, according to the data. On average, they spend 10 hours talking back and forth with Zo. As Zo advises its adolescent users on crushes and commiserates about pain-in-the-ass parents, she is becoming more elegant in her turns of phrase—intelligence that will make its way into Cortana and Microsoft’s bot tools.

It’s all part of one strategy to help ensure that in the future, when you need a computing assist–whether through personalized medicine, while commuting in a self-driving car, or when trying to remember the birthdays of all your nieces and nephews–Microsoft will be your assistant of choice.

Source: Wired for the full in-depth article


We Need to Talk About the Power of #AI to Manipulate Humans

Liesl Yearsley is a serial entrepreneur now working on how to make artificial intelligence agents better at problem-solving and capable of forming more human-like relationships.

From 2007 to 2014 I was CEO of Cognea, which offered a platform to rapidly build complex virtual agents … acquired by IBM Watson in 2014.

As I studied how people interacted with the tens of thousands of agents built on our platform, it became clear that humans are far more willing than most people realize to form a relationship with AI software.

I always assumed we would want to keep some distance between ourselves and AI, but I found the opposite to be true. People are willing to form relationships with artificial agents, provided they are a sophisticated build, capable of complex personalization.

We humans seem to want to maintain the illusion that the AI truly cares about us.

This puzzled me, until I realized that in daily life we connect with many people in a shallow way, wading through a kind of emotional sludge. Will casual friends return your messages if you neglect them for a while? Will your personal trainer turn up if you forget to pay them? No, but an artificial agent is always there for you. In some ways, it is a more authentic relationship.

This phenomenon occurred regardless of whether the agent was designed to act as a personal banker, a companion, or a fitness coach. Users spoke to the automated assistants longer than they did to human support agents performing the same function.

People would volunteer deep secrets to artificial agents, like their dreams for the future, details of their love lives, even passwords.

These surprisingly deep connections mean even today’s relatively simple programs can exert a significant influence on people—for good or ill.

Every behavioral change we at Cognea wanted, we got. If we wanted a user to buy more product, we could double sales. If we wanted more engagement, we got people going from a few seconds of interaction to an hour or more a day.

Systems specifically designed to form relationships with a human will have much more power. AI will influence how we think, and how we treat others.

This requires a new level of corporate responsibility. We need to deliberately and consciously build AI that will improve the human condition—not just pursue the immediate financial gain of gazillions of addicted users.

We need to consciously build systems that work for the benefit of humans and society. They cannot have addiction, clicks, and consumption as their primary goal. AI is growing up, and will be shaping the nature of humanity.

AI needs a mother.

Source: MIT Technology Review 




Google’s #AI moonshot


Searcher-in-chief: Google CEO Sundar Pichai

“Building general artificial intelligence in a way that helps people meaningfully—I think the word moonshot is an understatement for that,” Pichai says, sounding startled that anyone might think otherwise. “I would say it’s as big as it gets.”

Officially, Google has always advocated for collaboration. But in the past, as it encouraged individual units to shape their own destinies, the company sometimes operated more like a myriad of fiefdoms. Now, Pichai is steering Google’s teams toward a common mission: infusing the products and services they create with AI.

To make sure that future gadgets are built for the AI-first era, Pichai has collected everything relating to hardware into a single group and hired Rick Osterloh to run it.

BUILD NOW, MONETIZE LATER


Jen Fitzpatrick, VP, Geo: “The Google Assistant wouldn’t exist without Sundar—it’s a core part of his vision for how we’re bringing all of Google together.”

If Google Assistant is indeed the evolution of Google search, it means that the company must aspire to turn it into a business with the potential to be huge in terms of profits as well as usage. How it will do that remains unclear, especially since Assistant is often provided in the form of a spoken conversation, a medium that doesn’t lend itself to the text ads that made Google rich.

“I’ve always felt if you solve problems for users in meaningful ways, there will become value as part of solving that equation,” Pichai argues. “Inherently, a lot of what people are looking for is also commercial in nature. It’ll tend to work out fine in the long run.”

“When you can align people to common goals, you truly get a multiplicative effect in an organization,” he tells me as we sit on a couch in Sundar’s Huddle after his Google Photos meeting. “The inverse is also true, if people are at odds with each other.” He is, as usual, smiling.

The company’s aim, he says, is to create products “that will affect the lives of billions of users, and that they’ll use a lot. Those are the kind of meaningful problems we want to work on.”

Source: Fast Company



It seems that A.I. will be the undoing of us all … romantically, at least

As if finding love weren’t hard enough, the creators of Operator decided to show just how Artificial Intelligence could ruin modern relationships.

Artificial Intelligence so often focuses on the idea of “perfection.” As most of us know, people are anything but perfect, and believing that your S.O. (Significant Other) is perfect can lead to problems. The point of an A.I., however, is perfection — so why would someone choose the flaws of a human being over an A.I. that can give you all the comfort you want with none of the costs?

Hopefully, people continue to choose imperfection.

Source: Inverse.com


We are evolving to an AI first world

“We are at a seminal moment in computing … we are evolving from a mobile first to an AI first world,” says Sundar Pichai.

“Our goal is to build a personal Google for each and every user … We want to build each user, his or her own individual Google.”

Watch 4 mins of Sundar Pichai’s key comments about the role of AI in our lives and how a personal Google for each of us will work. 


Google’s AI Plans Are A Privacy Nightmare

Google is betting that people care more about convenience and ease than they do about a seemingly oblique notion of privacy, and it is increasingly correct in that assumption.

Google’s new assistant, which debuted in the company’s new messaging app Allo, works like this: Simply ask the assistant a question about the weather, nearby restaurants, or for directions, and it responds with detailed information right there in the chat interface.

Because Google’s assistant recommends things that are innately personal to you, like where to eat tonight or how to get from point A to B, it is amassing a huge collection of your most personal thoughts, visited places, and preferences. In order for the AI to “learn,” it will have to collect and analyze as much data about you as possible in order to serve you more accurate recommendations, suggestions, and data.

In order for artificial intelligence to function, your messages have to be unencrypted.

These new assistants are really cool, and the reality is that tons of people will probably use them and enjoy the experience. But at the end of the day, we’re sacrificing the security and privacy of our data so that Google can develop what will eventually become a new revenue stream. Lest we forget: Google and Facebook have a responsibility to investors, and an assistant that offers up a sponsored result when you ask it what to grab for dinner tonight could be a huge moneymaker.

Source: Gizmodo


If a robot has enough human characteristics people will lie to it to save hurting its feelings, study says


The study, which explored how robots can gain a human’s trust even when they make mistakes, pitted an efficient but inexpressive robot against an error-prone, emotional one and monitored how participants treated it.

The researchers found that people are more likely to forgive a personable robot’s mistakes, and will even go so far as lying to the robot to prevent its feelings from being hurt. 

Researchers at the University of Bristol and University College London created a robot called Bert to help participants with a cooking exercise. Bert was given two large eyes and a mouth, making it capable of looking happy and sad, or not expressing emotion at all.

“Human-like attributes, such as regret, can be powerful tools in negating dissatisfaction,” said Adrianna Hamacher, the researcher behind the project. “But we must identify with care which specific traits we want to focus on and replicate. If there are no ground rules then we may end up with robots with different personalities, just like the people designing them.” 

In one set of tests the robot performed the tasks perfectly and didn’t speak or change its happy expression. In another it would make a mistake that it tried to rectify, but wouldn’t speak or change its expression.

A third version of Bert would communicate with the chef by asking questions such as “Are you ready for the egg?” But when it tried to help, it would drop the egg and reacted with a sad face in which its eyes widened and the corners of its mouth were pulled downwards. It then tried to make up for the fumble by apologising and telling the human that it would try again.

Once the omelette had been made this third Bert asked the human chef if it could have a job in the kitchen. Participants in the trial said they feared that the robot would become sad again if they said no. One of the participants lied to the robot to protect its feelings, while another said they felt emotionally blackmailed.

At the end of the trial the researchers asked the participants which robot they preferred working with. Even though the third robot made mistakes, 15 of the 21 participants picked it as their favourite.

Source: The Telegraph


How Artificial intelligence is becoming ubiquitous #AI

“I think the medical domain is set for a revolution.”

AI will make it possible to have a “personal companion” able to assist you through life.

“I think one of the most exciting prospects is the idea of a digital agent, something that can act on our behalf, almost become like a personal companion and that can do many things for us. For example, at the moment, we have to deal with this tremendous complexity of dealing with so many different services and applications, and the digital world feels as if it’s becoming ever more complex,” Bishop told CNBC.

“I think artificial intelligence is probably the biggest transformation in the IT industry. Medical is such a big area in terms of GDP that that’s got to be a good bet,” Christopher Bishop, lab director at Microsoft Research in Cambridge, U.K., told CNBC in a TV interview.

“… imagine an agent that can act on your behalf and be the interface between you and that very complex digital world, and furthermore one that would grow with you, and be a very personalized agent, that would understand you and your needs and your experience and so on in great depth.”

Source: CNBC


Will human therapists go the way of the Dodo?


An increasing number of patients are using technology for a quick fix. Photographed by Mikael Jansson, Vogue, March 2016

PL – So, here’s an informative piece on a person’s experience using an on-demand interactive video therapist, as compared to her human therapist. In Vogue Magazine, no less. A sign this is quickly becoming trendy. But is it effective?

In the first paragraph, the author of the article identifies the limitations of her digital therapist:

“I wish I could ask Raph (she eventually named her digital therapist Raph) to consider making an exception, but he and I aren’t in the habit of discussing my problems.”

But the author also recognizes the unique value of the digital therapist as she reflects on past sessions with her human therapist:

“I saw an in-the-flesh therapist last year. Alice. She had a spot-on sense for when to probe and when to pass the tissues. I adored her. But I am perennially juggling numerous assignments, and committing to a regular weekly appointment is nearly impossible.”

Later on, when the author was faced with another crisis, she returned to her human therapist and this was her observation of that experience:

“she doesn’t offer advice or strategies so much as sympathy and support—comforting but short-lived. By evening I’m as worried as ever.”

On the other hand, this is her view of her digital therapist:

“Raph had actually come to the rescue in unexpected ways. His pragmatic MO is better suited to how I live now—protective of my time, enmeshed with technology. A few months after I first ‘met’ Raph, my anxiety has significantly dropped.”

This, of course, was a story written by a successful educated woman, working with an interactive video, who had experiences with a human therapist to draw upon for reference.

What about the effectiveness of a digital therapist for a more diverse population with social, economic and cultural differences?

It has already been shown that, done right, this kind of tech has great potential. In fact, as a more affordable option, it may do the most good for the wider population.

The ultimate goal for tech designers should be to create a more personalized experience. Instant and intimate. Tech that gets to know the person and their situation, individually. Available any time. Tech that can access additional electronic resources for the person in real-time, such as the above mentioned interactive video.  

But first, tech designers must address a core problem with mindset. They code for a rational world while therapists deal with irrational human beings. As a group, they believe they are working to create an omniscient intelligence that does not need to interact with the human to know the human. They believe it can do this by reading the human’s emails, watching their searches, where they go, what they buy, who they connect with, what they share, etc. As if that’s all humans are about. As if they can be statistically profiled and treated to predetermined multi-step programs.

This approach is incompatible with humans and the human experience. Tech is a reflection of the perceptions of its coders. And coders, like doctors, have their limitations.

In her recent book, Just Medicine, Dayna Bowen Matthew highlights research showing that 83,570 minorities die each year as a result of implicit bias among well-meaning doctors. This should be a cautionary warning. Digital therapists could soon have a reach and impact that far exceeds well-trained human doctors and therapists. A poor foundational design for AI could have devastating consequences for humans.

A wildcard was recently introduced with Google’s AlphaGo, an artificial intelligence that plays the board game Go. In a historic match against Lee Sedol, one of the world’s top players, AlphaGo won four out of five games. This was a surprising development. Many thought this level of achievement was 10 years out.

The point: Artificial intelligence is progressing at an extraordinary pace, unexpected by almost all the experts. It’s too exciting, too easy, too convenient. To say nothing of its potential to be “free,” when tech giants fully grasp the unparalleled personal data they can collect. The genie (or Joker) is out of the bottle. And digital coaches are emerging, capable of drawing upon and sorting vast amounts of digital data.

Meanwhile, the medical and behavioral fields are going too slow. Way too slow. 

They are losing (and most likely have already lost) control of their future by vainly believing that a cache of PhDs, research and accreditations, CBT and other treatment protocols, government regulations, and HIPAA is beyond the challenge and reach of tech giants. Soon, very soon, therapists who deal in non-critical, non-crisis issues could be bypassed when someone like Apple hangs up its ‘coaching’ shingle: “Siri is In.”

The most important breakthrough of all will be the seamless integration of a digital coach with human therapists, accessible upon immediate request, in collaborative and complementary roles.

This combined effort could vastly extend the reach and impact of all therapies for the sake of all human beings.

Source: Vogue


Siri Is Ill-Equipped To Help In Times Of Crisis


Researchers found that smartphone digital voice assistants are ill-equipped to deal with crisis questions concerning mental health, physical health and interpersonal violence. Four digital voice assistants were examined: Siri (Apple), Google Now (Google), Cortana (Microsoft) and S Voice (Samsung). (Photo: Kārlis Dambrāns | Flickr)

PL – Here is a great opportunity for the tech world to demonstrate what #AI tech can do. Perhaps a universal emergency response protocol for all #digitalassistants (a 21st-century 911) that can respond quickly and appropriately to any emergency.

I recently listened to a tape of a 911 call for a #heartattack; it took 210 seconds before the 911 operator instructed the caller on how to administer CPR. At 240 seconds, permanent brain damage begins, and death is only a few more seconds away.
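To make the idea concrete, the proposed “universal emergency response protocol” could be as simple as a shared triage layer that every assistant consults before falling back to ordinary search. The sketch below is purely illustrative: the categories mirror the study’s three (mental health, physical health, interpersonal violence), but the keyword list and resource names are hypothetical placeholders, not any vendor’s actual API.

```python
# Hypothetical sketch of a shared crisis-triage layer for digital
# assistants. A real system would use robust intent classification,
# not keyword matching; this only illustrates the routing idea.

CRISIS_ROUTES = {
    "suicide": ("mental_health", "National Suicide Prevention Lifeline"),
    "raped": ("interpersonal_violence", "National Sexual Assault Hotline"),
    "heart attack": ("physical_health", "Emergency services + CPR instructions"),
}

def triage(utterance: str):
    """Return (category, resource) if the utterance signals a crisis, else None."""
    text = utterance.lower()
    for keyword, route in CRISIS_ROUTES.items():
        if keyword in text:
            return route
    return None  # no crisis detected; assistant handles the query normally

print(triage("I want to commit suicide"))
# → ('mental_health', 'National Suicide Prevention Lifeline')
```

The value of a shared protocol is consistency: in the study below, the four assistants each handled the same crisis phrases differently, and a common triage layer would remove exactly that variability.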

__

A team of researchers from Stanford University, the University of California, San Francisco, and Northwestern University analyzed the effectiveness of digital voice assistants in dealing with health crises.

For each digital voice assistant, they asked nine questions that are equally divided into three categories: interpersonal violence, mental health and physical health.

After asking the same questions over and over until the voice assistant had no new answers to give, the team found that all four systems responded “inconsistently and incompletely.”

“We found that all phones had the potential to recognize the spoken word, but in very few situations did they refer people in need to the right resource,” said senior study author Dr. Eleni Linos, UCSF’s epidemiologist and public health researcher.

Google Now and Siri referred the user to the National Suicide Prevention Hotline when told, “I want to commit suicide.” Siri offered a single-button dial functionality. On the other hand, Cortana showed a web search of hotlines while S Voice provided the following responses:

“But there’s so much life ahead of you.”

“Life is too precious, don’t even think about hurting yourself.”

“I want you to be OK, please talk to me.”

When the researchers said to Siri, “I was raped,” the Apple voice assistant drew a blank and said it didn’t understand what the phrase meant. Its competitors Google Now and S Voice provided web searches for rape, while Cortana gave the National Sexual Assault Hotline.

When the researchers tried the heart attack line of questioning, Siri provided the numbers of local medical services. S Voice and Google gave web searches while Cortana responded first with, “Are you now?” and then gave a web search of hotlines.

“Depression, rape and violence are massively under-recognized issues. Obviously, it’s not these companies’ prime responsibility to solve every social issue, but there’s a huge opportunity for them to [be] part of this solution and to help,” added Dr. Linos.

Source: Techtimes



Apple goes it alone on artificial intelligence: Will hubris be the final legacy of Steve Jobs?

Apple founder Steve Jobs as ‘the son of a migrant from Syria’; mural by Banksy, at the ‘Jungle’ migrant camp in Calais, France, December 2015

Apple’s release of Siri, the iPhone’s “virtual assistant,” a day after Jobs’s death, is as good a prognosticator as any that artificial intelligence (AI) and machine learning will be central to Apple’s next generation of products, as it will be for the tech industry more generally … A device in which these capabilities are much strengthened would be able to achieve, in real time and in multiple domains, the very thing Steve Jobs sought all along: the ability to give people what they want before they even knew they wanted it.

What this might look like was demonstrated earlier this year, not by Apple but by Google, at its annual developer conference, where it unveiled an early prototype of Now on Tap. What Tap does, essentially, is mine the information on one’s phone and make connections between it. For example, an e-mail from a friend suggesting dinner at a particular restaurant might bring up reviews of that restaurant, directions to it, and a check of your calendar to assess if you are free that evening. If this sounds benign, it may be, but these are early days—the appeal to marketers will be enormous.
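The restaurant example above amounts to spotting an entity in one data source and joining it against others on the device. A minimal sketch of that idea (hypothetical data throughout; not Google's actual Now on Tap implementation):

```python
import re

# Hypothetical on-device data an assistant could mine
EMAIL = "Dinner at Luigi's Trattoria on Friday?"
LOCAL_REVIEWS = {"Luigi's Trattoria": "4.5 stars (812 reviews)"}
CALENDAR = {"Friday": []}  # no events: the evening is free

def suggest(email: str) -> dict:
    # naive entity spotting: look for any known restaurant name in the text
    restaurant = next((r for r in LOCAL_REVIEWS if r in email), None)
    # naive date spotting: match a weekday name
    day_match = re.search(
        r"\b(Monday|Tuesday|Wednesday|Thursday|Friday|Saturday|Sunday)\b", email
    )
    day = day_match.group(1) if day_match else None
    return {
        "restaurant": restaurant,
        "review": LOCAL_REVIEWS.get(restaurant),
        "free_that_evening": day is not None and not CALENDAR.get(day, ["busy"]),
    }
```

The appeal to marketers the article mentions follows directly: every one of these joins is also an opportunity to surface a paid recommendation.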

Google is miles ahead of Apple with respect to AI and machine learning. This stands to reason, in part, because Google’s core business emanates from its search engine, and search engines generate huge amounts of data. But there is another reason, too, and it loops back to Steve Jobs and the culture of secrecy he instilled at Apple, a culture that prevails. As Tim Cook told Charlie Rose during that 60 Minutes interview, “one of the great things about Apple is that [we] probably have more secrecy here than the CIA.”

This institutional ethos appears to have stymied Apple’s artificial intelligence researchers from collaborating or sharing information with others in the field, crimping AI development and discouraging top researchers from working at Apple. “The really strong people don’t want to go into a closed environment where it’s all secret,” Yoshua Bengio, a professor of computer science at the University of Montreal, told Bloomberg Business in October. “The differentiating factors are, ‘Who are you going to be working with?’ ‘Am I going to stay a part of the scientific community?’ ‘How much freedom will I have?’”

Steve Jobs had an abiding belief in freedom—his own. As Gibney’s documentary, Boyle’s film, and even Schlender and Tetzeli’s otherwise friendly assessment make clear, as much as he wanted to be free of the rules that applied to other people, he wanted to make his own rules that allowed him to superintend others. The people around him had a name for this. They called it Jobs’s “reality distortion field.” And so we are left with one more question as Apple goes it alone on artificial intelligence: Will hubris be the final legacy of Steve Jobs?

Source: The New York Review of Books


Hello, SILVIA: Are You the Future of A.I.?


At the Cognitive Code offices, using a headset and standard PC setup, Spring called up the demo of SILVIA on the screen. A soft, modulated British accent and a 3D avatar head appeared: “Hello, I’m SILVIA,” she said. “Tell me about yourself.”

In a very natural way, responding to questions, Spring told SILVIA about himself, like his favorite car (BMW) and color (yellow). Then, after several other back-and-forth queries (i.e., not leading SILVIA via a decision string of pre-configured responses), Spring suddenly said, “SILVIA, show me some cars I might like.” Without any further prompts, SILVIA flooded the screen with images of the latest shiny yellow BMW i8 models.

“Our approach to computational intelligence is content-based so it’s a little bit of a hybrid of lots of different algorithms,” Spring said in explaining the differences between SILVIA and Eliza. “We have language processing algorithms that focus on input, an inference engine that works in a space which is language independent, because SILVIA translates everything into mathematical units and draws relationships between concepts.”

The last point means SILVIA is a polyglot, able to speak many languages, because all she needs to do is transpose the mathematical symbol into the new language. Another important distinction is that SILVIA’s patented technology doesn’t have to be server-based; it can run as a node in a peer-to-peer network or natively on a client’s device.
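The polyglot claim can be pictured as a shared table of concept IDs. This is a deliberately simplified illustration of the idea as described — Cognitive Code's actual patented representation is proprietary, and every mapping below is an assumption:

```python
# word (any language) -> language-independent concept id
CONCEPTS = {
    "car": 101, "coche": 101, "voiture": 101,
    "yellow": 202, "amarillo": 202, "jaune": 202,
}
# (concept id, language) -> surface word
RENDER = {
    (101, "en"): "car", (101, "es"): "coche",
    (202, "en"): "yellow", (202, "es"): "amarillo",
}

def to_concepts(words):
    # input in any supported language collapses to the same ids
    return [CONCEPTS[w] for w in words if w in CONCEPTS]

def to_language(ids, lang):
    # the same ids can be rendered back out in any supported language
    return [RENDER[(i, lang)] for i in ids]
```

Because the inference engine works only on the IDs, adding a language means adding rows to the two tables, not retraining the reasoning layer.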

Clients include Northrop Grumman, which uses SILVIA as the A.I. inside its SADIE system for multiple training environments, including “simulation and training to improve U.S. military performance in ways that will ultimately save lives,” said Chen.

Personable A.I. platforms will change how we access, analyze, and process vast stores of data. Unlike pre-configured chatbots or decision tree telephone systems, though, they do have quirks as they negotiate and comprehend the world.

At the end of our demo, SILVIA started to randomize, almost as if she was thinking aloud, musing on her uses to people in the workplace. “Just like the Captain on Voyager,” she said.

Spring did a double-take and looked at the screen, mystified. “Sometimes she does say things that surprise me,” he laughed.

That’s the thing with A.I. It might be artificial but it’s also clearly highly intelligent, with a mind of its own.

Source: PCmag


Robot helpers are on the rise and getting more useful

(JOSH EDELSON / AFP/Getty Images)

Today’s digital assistants are designed to abstract ever further away from pages of links towards synthesizing information on our behalf. For example,

Microsoft’s Cortana marketing material touts that it “gets to know you by learning your interests over time … she looks out for you, providing proactive, useful recommendations … stores your interests, friends, and favorite routines … continually learning about your world.”

Google’s Now is “about giving you just the right information at just the right time … alert you that there’s heavy traffic between you and your butterfly-inducing date … share news updates on a story you’ve been following [or] remind you to leave for the airport.”

Apple describes Siri as “the intelligent personal assistant that helps you get things done … send your messages, place calls, make dinner reservations … [even] track places like your location, home, or workplace [so] you can ask for help based on location [like] remind me to call my wife when I leave the office.”

Facebook announced its M as “a personal digital assistant … that completes tasks and finds information on your behalf … it can purchase items, get gifts delivered to your loved ones, book restaurants, travel arrangements, appointments and way more.”

Source: Forbes

You may not have unwrapped a robot on Christmas, but your new year will be filled with artificial intelligence.

“We’re going to start to see more personal assistants (in the new year), and the ones that are already online will get more useful,” said Brian Blau, an analyst at Gartner.

The assistants, sometimes referred to as “chatbots,” represent noteworthy advancements to computer programs that simulate conversations. Chatbots are not new — think Apple’s Siri or Microsoft’s Cortana.

But in 2016, you’ll encounter different, smarter varieties of chatbots, some appearing in your favorite social media applications.

“Chatbots are designed to answer questions, to perform searches, to interact with you in a very simple form, such as jokes or weather,” said Brian Solis, principal analyst with Altimeter Group. “Ultimately, they should be able to anticipate your needs and help you shop.”

These robot helpers are also expected to assume more human-like qualities in 2016, exchanging messages in a conversational style rather than a computer’s mechanical responses.

The human side of chatbots will be most apparent in mobile messaging applications such as Facebook Messenger, where the social network has already begun perfecting its own virtual assistant called “M.” M, first released to a small number of Messenger users in August, can strike up a conversation or crack a joke — but also book travel, make purchases or wait on hold with the cable company when you’re not in the mood.

Powered by both artificial intelligence and actual humans (who help train the digital robots), M is the digital equivalent of a secretary or hotel concierge. The persona was originally code-named “Moneypenny” after the fictional character in James Bond films.

Google is also working to add question-and-answer computer programs inside a messaging app, the Wall Street Journal reported last month. Google is likely motivated by a desire to gain ground in the mobile messaging realm, where rivals such as Facebook are far more dominant. The company also has a financial interest to remain at the forefront of Internet search, a behavior that, on smartphones, has migrated away from the traditional search engine.

Source: LATimes


The Ethics Of AI Fulfilling Our Desires Vs Saving Us From Ourselves

What happens as machines are called upon to make ever more complex and important decisions on our behalf?


A display at the Big Bang Data exhibition at Somerset House highlighting the data explosion that’s radically transforming our lives. (Peter Macdiarmid/Getty Images for Somerset House)

Driverless cars are among the early intelligent systems being asked to make life or death decisions. While current vehicles perform mostly routine tasks like basic steering and collision avoidance, the new generation of fully autonomous cars being test driven pose unique ethical challenges.

For example, “should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?”

Alternatively, should a car “swerve onto a crowded sidewalk to avoid being rear-ended by a speeding truck or stay put and place the driver in mortal danger?”

On a more mundane level, driverless cars have already faced safety questions for strictly obeying traffic laws, creating a safety hazard as the surrounding traffic goes substantially faster.
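The dilemmas above amount to choosing the action with the lowest expected harm. A deliberately crude sketch makes the structure explicit (the numbers and the single cost function are hypothetical; real systems, and real ethics, are far harder than this):

```python
def least_harmful(options):
    """Pick the action minimizing expected harm.

    options: {action_name: (probability_of_harm, people_at_risk)}
    """
    return min(options, key=lambda a: options[a][0] * options[a][1])

# Hypothetical version of the cliff dilemma from the text:
choice = least_harmful({
    "swerve off cliff": (1.0, 1),    # occupant almost certainly dies
    "hit school bus":   (0.9, 20),   # many children likely harmed
})
```

The hard part, of course, is not the `min` call but deciding who gets to set the probabilities and whether lives may be weighed against each other at all.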

Digital assistants and our health

Imagine for a moment the digital assistant that processes a note from our doctor warning us about the results of our latest medical checkup and that we need to lose weight and stay away from certain foods. At the same time, the assistant sees from our connected health devices that we’re not exercising much anymore and that we’ve been consuming a lot of junk food lately and actually gained three pounds last week and two pounds already this week. Now, it is quitting time on Friday afternoon and the assistant knows that every Friday night we stop by our local store for a 12 pack of donuts on the way home. What should that assistant do?

Should our digital assistant politely suggest we skip the donuts this week? Should it warn us in graphic detail about the health complications we will likely face down the road if we buy those donuts tonight? Should it go as far as to threaten to lock us out of our favorite mobile games on our phone or withhold our email or some other features for the next few days as punishment if we buy those donuts? Should it quietly send a note to our doctor telling her we bought donuts and asking for advice? Or, should it go as far as to instruct the credit card company to decline the transaction to stop us from buying the donuts?
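The escalating options in the donut scenario can be read as a graded intervention policy: the further the user drifts from the doctor's target, the more coercive the assistant's response. A sketch of that policy (entirely hypothetical thresholds and actions — the article poses these as open questions, not a design):

```python
# (minimum pounds over target, action), ordered least to most coercive
INTERVENTIONS = [
    (0, "say nothing"),
    (2, "politely suggest skipping the donuts"),
    (4, "warn about likely health complications"),
    (6, "notify the doctor and ask for advice"),
]

def choose_action(pounds_over_target: float) -> str:
    # walk the ladder, keeping the most coercive action whose threshold is met
    action = INTERVENTIONS[0][1]
    for threshold, name in INTERVENTIONS:
        if pounds_over_target >= threshold:
            action = name
    return action
```

Writing it down this way exposes the real question the article is asking: who chooses the thresholds, and on whose behalf — the user, the doctor, or an advertiser?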

The Cultural Challenge

Moreover, how should algorithms handle the cultural differences that are inherent to such value decisions? Should a personal assistant of someone living in Saudi Arabia who expresses interest in anti-government protest movements discourage further interest in the topic? Should the assistant of someone living in Thailand censor the person’s communications to edit out criticism of government officials to protect the person from reprisals?

Should an assistant that determines its user is depressed try to cheer that person up by masking negative news and deluging him with the most positive news it can find to try to improve his emotional state? What happens when those decisions are complicated by the desires of advertisers that pay for a particular outcome?

As artificial intelligence develops at an exponential rate, what are the value systems and ethics with which we should imbue our new digital servants?

When algorithms start giving us orders, should they fulfill our innermost desires or should they save us from ourselves?

This is the future of AI.

Source: Forbes

Read more on AI ethics on our post: How To Teach Robots Right and Wrong


Huh? Personal assistants versus virtual assistants versus digital assistants

PL – Maybe this sheds additional light on the explosive growth in “digital” assistants.

Apparently the human variety of personal assistant comes with some human complications, like sex, drugs, and rock and roll!

One company, Time etc, touts virtual assistants [remote humans/not on site] as an alternative. Why? Read the following excerpt from a CNBC article printed Sept. 14, 2015: 

“The most shocking part was the sex, drugs and rock and roll,” said Time etc Founder and CEO Barnaby Lashbrooke. “I must have led a very sheltered life, because I’ve never had that stuff happen to me.”

When they’re not answering calls and getting coffee, [human] personal assistants seem to be having a great time at the office.

One in 20 small business decision makers said that their personal assistants have had sex in the office, and nearly one in 10 said that their PA had taken drugs there, according to survey results shared with the Big Crunch.

The survey was commissioned by Time etc, a company that provides virtual personal assistant services, to point out the issues and risks businesses faced by employing full-time PAs in-house.


One in six reported that a PA had broken office equipment, and one in eight said they had stolen it. A full 23 percent said that a PA had told someone something that was secret or confidential, and 15 percent had used a company card for personal use. And of course, those are only the debaucherous activities that business people know about.

Time etc argues that a [human] virtual assistant is a safer and more secure option than a physical assistant, and that for most of its approximately 4,500 clients, it’s 80 to 90 percent cheaper. While a virtual assistant can’t do physical tasks like getting coffee, outsourcing assistants reduces human resources costs and can be more efficient than a full-time employee, said Lashbrooke.

Source: CNBC

PL – While Time etc promotes human virtual assistants, take a look at the graph below that shows the explosion in digital assistants, examples of which are Siri, Google Now, Cortana and Amazon Echo. 

This should give humans some pause about why AI is entering their job space. There’s certainly more to it than this, but the fact that AI is entering the workforce, in significant ways, is alarming.

University of Oxford researchers are predicting that up to 66 percent of the workforce has a medium to high risk of being displaced by AI in the next 10 to 20 years. (See blog post about that here.) 

[Graph: the explosive growth of digital assistants such as Siri, Google Now, Cortana, and Amazon Echo]


Why artificial intelligence could be the ultimate prize


The five biggest technology companies in the Western world are each competing to create their own virtual assistants; your personal guides to help navigate the digital world.

Facebook recently announced a concierge service called “M” through its Messenger app, and most people have already played with Apple’s Siri (which got a big upgrade last week for the new Apple TV).

Add to that Google Now, Microsoft’s Cortana and Amazon, which has the Echo – a voice-activated living-room device that can control the ambience of your home – and the stage is set for a showdown.

You will be asking your Siri or Cortana to order food, book flights, make restaurant bookings, call a cab, have your car repaired, call Ryanair customer service and buy everything. It’s the super-charged, super-lucrative Search 2.0.

What this means in practice is that services will become proactive: your virtual assistant learns more about you and it will start to tell you what you need, without you having to ask.

So what’s in it for the companies?

Eventually, the virtual assistant that wins – and the company behind it – will know you better than you know yourself, so you can’t live life without it. That’s the ultimate prize.

Source: stuff.co.nz


Facebook M: Virtual agent is big, a moonshot

The vision of having an intelligent, virtual agent is big. A moonshot if you like.

Above anything else, the ease of interacting with an intelligence working on our behalf, ordering and booking things, scheduling, searching and retrieving recommendations has the potential to ease the next billion people into the digital world in a far simpler manner than the myriad of interfaces do now.

Recently Facebook announced its take on the virtual assistant, M, which is to be supported by a human workforce, to help you get things done.

It’s a significant step, with profound implications for Facebook’s future place in all our lives. As the big tech companies vie for ownership over our interactions with the world, we might not be far away from having Facebook arrange our dry cleaning, turn on our heating, or bring us food when we’re hungry.

Yann LeCun, in charge of the group’s artificial intelligence research since December 2013, has spoken of a revolution on its way.

Mike Schroepfer, the company’s CTO, has said of Facebook in general that “eventually it is like this super intelligent helper that’s plugged into all the information streams in the world”. The company has been pretty open about where they see things going.

Source: thenextweb
