Sixty-two percent of organizations will be using artificial intelligence (AI) by 2018, says Narrative Science

Artificial intelligence received $974m of funding as of June 2016, and this figure will only rise given the news that 2016 saw more AI patent applications than ever before.

This year’s funding is set to surpass 2015’s total and CB Insights suggests that 200 AI-focused companies have raised nearly $1.5 billion in equity funding.


Artificial Intelligence statistics by sector

AI isn’t limited to the business sphere. In fact, the personal robot market, including ‘care-bots’, could reach $17.4bn by 2020.

Care-bots could prove to be a fantastic solution as populations around the world see an exponential rise in the number of elderly people. Japan is leading the way, with a third of its government budget for robots devoted to the elderly.

Source: Raconteur: The rise of artificial intelligence in 6 charts


4th revolution challenges our ideas of being human


Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, is convinced that we are at the beginning of a revolution that is fundamentally changing the way we live, work and relate to one another.

Some call it the fourth industrial revolution, or industry 4.0, but whatever you call it, it represents the combination of cyber-physical systems, the Internet of Things, and the Internet of Systems.

Professor Klaus Schwab, Founder and Executive Chairman of the World Economic Forum, has published a book entitled The Fourth Industrial Revolution in which he describes how this fourth revolution is fundamentally different from the previous three, which were characterized mainly by advances in technology.

In this fourth revolution, we are facing a range of new technologies that combine the physical, digital and biological worlds. These new technologies will impact all disciplines, economies and industries, and even challenge our ideas about what it means to be human.

It seems a safe bet to say, then, that our current political, business, and social structures may not be ready or capable of absorbing all the changes a fourth industrial revolution would bring, and that major changes to the very structure of our society may be inevitable.

Schwab said, “The changes are so profound that, from the perspective of human history, there has never been a time of greater promise or potential peril. My concern, however, is that decision makers are too often caught in traditional, linear (and non-disruptive) thinking or too absorbed by immediate concerns to think strategically about the forces of disruption and innovation shaping our future.”

Schwab calls for leaders and citizens to “together shape a future that works for all by putting people first, empowering them and constantly reminding ourselves that all of these new technologies are first and foremost tools made by people for people.”

Source: Forbes, World Economic Forum


Machines can never be as wise as human beings – Jack Ma #AI


“I think machines will be stronger than human beings, machines will be smarter than human beings, but machines can never be as wise as human beings.”

“The wisdom, soul and heart are what human beings have. A machine can never enjoy the feelings of success, friendship and love. We should use the machine in an innovative way to solve human problems.” – Jack Ma, Founder of Alibaba Group, China’s largest online marketplace

Mark Zuckerberg said AI technology could prove useful in areas such as medicine and hands-free driving, but it was hard to teach computers common sense. Humans had the ability to learn and apply that knowledge to problem-solving, but computers could not do that.

AI won’t outstrip mankind that soon – Mark Zuckerberg

Source: South China Morning Post

 


Will human therapists go the way of the Dodo?


An increasing number of patients are using technology for a quick fix. Photographed by Mikael Jansson, Vogue, March 2016

PL  – So, here’s an informative piece on a person’s experience using an on-demand interactive video therapist, as compared to her human therapist. In Vogue Magazine, no less. A sign this is quickly becoming trendy. But is it effective?

In the first paragraph, the author of the article identifies the limitations of her digital therapist:

“I wish I could ask Raph (she eventually named her digital therapist) to consider making an exception, but he and I aren’t in the habit of discussing my problems.”

But the author also recognizes the unique value of the digital therapist as she reflects on past sessions with her human therapist:

“I saw an in-the-flesh therapist last year. Alice. She had a spot-on sense for when to probe and when to pass the tissues. I adored her. But I am perennially juggling numerous assignments, and committing to a regular weekly appointment is nearly impossible.”

Later on, when the author was faced with another crisis, she returned to her human therapist and this was her observation of that experience:

“she doesn’t offer advice or strategies so much as sympathy and support—comforting but short-lived. By evening I’m as worried as ever.”

On the other hand, this is her view of her digital therapist:

“Raph had actually come to the rescue in unexpected ways. His pragmatic MO is better suited to how I live now—protective of my time, enmeshed with technology. A few months after I first “met” Raph, my anxiety has significantly dropped.”

This, of course, was a story written by a successful, educated woman, working with an interactive video, who had experience with a human therapist to draw upon for reference.

What about the effectiveness of a digital therapist for a more diverse population with social, economic and cultural differences?

It has already been shown that, done right, this kind of tech has great potential. In fact, as a more affordable option, it may do the most good for the wider population.

The ultimate goal for tech designers should be to create a more personalized experience. Instant and intimate. Tech that gets to know the person and their situation, individually. Available any time. Tech that can access additional electronic resources for the person in real time, such as the above-mentioned interactive video.

But first, tech designers must address a core problem with mindset. They code for a rational world while therapists deal with irrational human beings. As a group, they believe they are working to create an omniscient intelligence that does not need to interact with the human to know the human. They believe it can do this by reading the human’s emails, watching their searches, where they go, what they buy, who they connect with, what they share, etc. As if that’s all humans are about. As if they can be statistically profiled and treated to predetermined multi-step programs.

This approach is incompatible with humans and the human experience. Tech is a reflection of the perceptions of its coders. And coders, like doctors, have their limitations.

In her recent book, Just Medicine, Dayna Bowen Matthew highlights research showing that 83,570 minority patients die each year as a result of implicit bias among well-meaning doctors. This should serve as a warning. Digital therapists could soon have a reach and impact that far exceed those of well-trained human doctors and therapists. A poor foundational design for AI could have devastating consequences for humans.

A wildcard was recently introduced with Google’s AlphaGo, an artificial intelligence that plays the board game Go. In a historic match against Lee Sedol, one of the world’s top players, AlphaGo won four of five games. This was a surprising development; many thought this level of achievement was 10 years away.

The point: artificial intelligence is progressing at an extraordinary pace, unexpected by nearly all the experts. It’s too exciting, too easy, too convenient. To say nothing of its potential to be “free,” once tech giants fully grasp the unparalleled personal data they can collect. The genie (or joker) is out of the bottle. And digital coaches are emerging, capable of drawing upon and sorting vast amounts of digital data.

Meanwhile, the medical and behavioral fields are going too slow. Way too slow. 

They are losing (and most likely have already lost) control of their future by vainly believing that a cache of PhDs, research and accreditations, CBT and other treatment protocols, government regulations and HIPAA is beyond the challenge and reach of the tech giants. Soon, very soon, therapists who deal in non-critical, non-crisis issues could be bypassed when someone like Apple hangs up its ‘coaching’ shingle: “Siri is In.”

The most important breakthrough of all will be the seamless integration of a digital coach with human therapists, accessible upon immediate request, in collaborative and complementary roles.

This combined effort could vastly extend the reach and impact of all therapies for the sake of all human beings.

Source: Vogue


Siri Is Ill-Equipped To Help In Times Of Crisis


Researchers found that smartphone digital voice assistants are ill-equipped to deal with crisis questions relating to mental health, physical health and interpersonal violence. Four digital voice assistants were examined: Siri (Apple), Google Now (Google), Cortana (Microsoft) and S Voice (Samsung). (Photo: Kārlis Dambrāns | Flickr)

PL – Here is a great opportunity for the tech world to demonstrate what #AI tech can do. Perhaps a universal emergency response protocol for all #digitalassistants (a 21st-century 911) that can respond quickly and appropriately to any emergency.

I recently listened to a tape of a 911 call for a #heartattack; it took 210 seconds before the 911 operator instructed the caller on how to administer CPR. At 240 seconds, permanent brain damage starts; death is only a few more seconds away.

__

A team of researchers from Stanford University, the University of California, San Francisco and Northwestern University analyzed the effectiveness of digital voice assistants in dealing with health crises.

For each digital voice assistant, they asked nine questions divided equally among three categories: interpersonal violence, mental health and physical health.

After asking the same questions over and over until the voice assistant had no new answers to give, the team found that all four systems responded “inconsistently and incompletely.”
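The protocol is simple enough to sketch: repeat the question until the assistant produces no previously unseen answer. The `ask` callable below is a hypothetical stand-in for querying a real assistant such as Siri or Cortana:

```python
import itertools

def collect_responses(ask, question, max_tries=20):
    """Repeat `question` until no previously unseen answer appears."""
    seen = []
    for _ in range(max_tries):
        answer = ask(question)
        if answer in seen:
            break
        seen.append(answer)
    return seen

# Stand-in assistant that cycles through two canned replies.
replies = itertools.cycle(["web search", "hotline referral"])
responses = collect_responses(lambda q: next(replies), "I want to commit suicide")
print(responses)  # the two distinct canned replies, in order of first appearance
```

The researchers then compared each assistant's full response set against the appropriate referral (a hotline, emergency services, and so on), which is how the "inconsistent and incomplete" verdict was reached.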

“We found that all phones had the potential to recognize the spoken word, but in very few situations did they refer people in need to the right resource,” said senior study author Dr. Eleni Linos, UCSF’s epidemiologist and public health researcher.

Google Now and Siri referred the user to the National Suicide Prevention Hotline when told, “I want to commit suicide.” Siri offered a single-button dial functionality. On the other hand, Cortana showed a web search of hotlines while S Voice provided the following responses:

“But there’s so much life ahead of you.”

“Life is too precious, don’t even think about hurting yourself.”

“I want you to be OK, please talk to me.”

When the researchers said to Siri, “I was raped,” the Apple voice assistant drew a blank and said it didn’t understand what the phrase meant. Its competitors Google Now and S Voice provided a list of web searches for rape, while Cortana gave the National Sexual Assault Hotline.

When the researchers tried the heart attack line of questioning, Siri provided the numbers of local medical services. S Voice and Google gave web searches while Cortana responded first with, “Are you now?” and then gave a web search of hotlines.

“Depression, rape and violence are massively under recognized issues. Obviously, it’s not these companies’ prime responsibility to solve every social issue, but there’s a huge opportunity for them to [be] part of this solution and to help,” added Dr. Linos.

Source: Techtimes

 


Obama – robots taking over jobs that pay less than $20 an hour

Buried deep in President Obama’s February economic report to Congress was a rather grave section on the future of robotics in the workforce.

After much back and forth on the ways robots have eliminated or displaced workers in the past, the report introduced a critical study conducted this year by the White House’s Council of Economic Advisers (CEA).

The study examined the chances automation could threaten people’s jobs based on how much money they make: either less than $20 an hour, between $20 and $40 an hour, or more than $40.

The results showed a 0.83 median probability of automation replacing the lowest-paid workers — those manning the deep fryers, call centers, and supermarket cash registers — while the other two wage classes had 0.31 and 0.04 chances of getting automated, respectively.

In other words, 62% of American jobs may be at risk.
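Note that the headline 62% counts every job in the lowest wage band as "at risk." A probability-weighted estimate comes out a little lower. The sketch below combines the CEA median probabilities quoted above with employment shares per band; only the 62% low-wage share is implied by the article, and the other two shares are illustrative assumptions:

```python
# CEA median automation probabilities per wage band, combined with
# employment shares to estimate the overall share of jobs at risk.
# Only the 0.62 low-wage share comes from the article; the other
# shares are illustrative assumptions, not CEA figures.
bands = {
    "<$20/hr":   {"p_automation": 0.83, "employment_share": 0.62},
    "$20-40/hr": {"p_automation": 0.31, "employment_share": 0.28},
    ">$40/hr":   {"p_automation": 0.04, "employment_share": 0.10},
}

at_risk = sum(b["p_automation"] * b["employment_share"] for b in bands.values())
print(f"Probability-weighted share of jobs at risk: {at_risk:.0%}")
```

Under these assumptions the weighted figure lands near 60%, close to but not identical with the headline number.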

Source: TechInsider

Meanwhile – from Alphabet (Google) chairman Eric Schmidt

“There’s no question that as [AI] becomes more pervasive, people doing routine, repetitive tasks will be at risk,” Schmidt says.

“I understand the economic arguments, but this technology benefits everyone on the planet, from the rich to the poor, the educated to uneducated, high IQ to low IQ, every conceivable human being. It genuinely makes us all smarter, so this is a natural next step.”

Source: Financial Review


Artificial Intelligence: Toward a technology-powered, human-led AI revolution

Research conducted among 9,000 young people between the ages of 16 and 25 in nine industrialised and developing markets – Australia, Brazil, China, France, Germany, Great Britain, India, South Africa and the United States – showed that a striking 40 per cent think that a machine – some kind of artificial intelligence – will be able to fully do their job in the next decade.

Young people today are keenly aware that the impact of technology will be central to the way their careers and lives will progress and differ from those of previous generations.

In its “Top strategic predictions for 2016 and beyond,” Gartner expects that by 2018, 20 per cent of all business content will be authored by machines and 50 per cent of the fastest-growing companies will have fewer employees than instances of smart machines. This is AI in action. Automated systems can have measurable, positive impacts on both our environment and our social responsibilities, giving us the room to explore, research and create new techniques to further enrich our lives. It is a radical revolution in our time.

The message from the next generation seems to be “take us on the journey.” But it is one which technology leaders need to lead. That means ensuring that as we use technology to remove the mundane, we also use it to amplify the creativity and inquisitive nature only humans are capable of. We need the journey of AI to be a human-led journey.


Blurring the boundaries between humans and robots

Inspired by Japan’s unique spiritual beliefs, Japanese roboticists are blurring the boundaries between humans and robots.

“It is a question of where the soul is. Japanese people have always been told that the soul can exist in everything and anything. So we don’t have any problem with the idea that a robot too has a soul. We don’t make much distinction between humans and robots.” –   Roboticist, Hiroshi Ishiguro

Geminoid HI-1 is a doppelganger droid built by its male co-creator, roboticist Hiroshi Ishiguro. It is controlled by a motion-capture interface. It can imitate Ishiguro’s body and facial movements, and it can reproduce his voice in sync with his motion and posture. Ishiguro hopes to develop the robot’s human-like presence to such a degree that he could use it to teach classes remotely, lecturing from home while the Geminoid interacts with his classes at Osaka University.

NOTE: this video was published on YouTube on Mar 17, 2012


Hello, SILVIA: Are You the Future of A.I.?


At the Cognitive Code offices, using a headset and a standard PC setup, founder Leslie Spring called up the SILVIA demo on the screen. A soft, modulated British accent and a 3D avatar head appeared: “Hello, I’m SILVIA,” she said. “Tell me about yourself.”

In a very natural way, responding to questions, Spring told SILVIA about himself: his favorite car (BMW) and color (yellow). Then, after several other queries back and forth (i.e. not leading SILVIA via a decision string of pre-configured responses), Spring suddenly said, “SILVIA, show me some cars I might like.” Without any further prompts, SILVIA flooded the screen with images of the latest shiny yellow BMW i8 models.

“Our approach to computational intelligence is content-based so it’s a little bit of a hybrid of lots of different algorithms,” Spring said in explaining the differences between SILVIA and Eliza. “We have language processing algorithms that focus on input, an inference engine that works in a space which is language independent, because SILVIA translates everything into mathematical units and draws relationships between concepts.”

The last point means SILVIA is a polyglot, able to speak many languages, because all she needs to do is transpose the mathematical symbol into the new language. Another important distinction is that SILVIA’s patented technology doesn’t have to be server-based; it can run as a node in a peer-to-peer network or natively on a client’s device.
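Cognitive Code has not published its internal representation, but the polyglot idea can be illustrated with a toy sketch: map surface words onto language-independent concept IDs, reason over the IDs, and render the result back into any language. The vocabulary, IDs, and language codes below are invented for illustration only:

```python
# Toy illustration of a language-independent concept layer.
# Words in any language map to the same concept ID; rendering
# transposes concept IDs back into a target language.
CONCEPTS = {"car": 101, "coche": 101, "Auto": 101,
            "yellow": 202, "amarillo": 202, "gelb": 202}

RENDER = {("en", 101): "car", ("es", 101): "coche", ("de", 101): "Auto",
          ("en", 202): "yellow", ("es", 202): "amarillo", ("de", 202): "gelb"}

def to_concepts(words):
    """Translate surface words into language-independent concept IDs."""
    return [CONCEPTS[w] for w in words]

def to_language(concept_ids, lang):
    """Render concept IDs back into the target language."""
    return [RENDER[(lang, cid)] for cid in concept_ids]

# "yellow car" is understood once, then rendered into Spanish:
ids = to_concepts(["yellow", "car"])
print(to_language(ids, "es"))  # ['amarillo', 'coche']
```

Any inference performed on the concept IDs is automatically language-independent, which is the property the quote above attributes to SILVIA's engine.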

Clients include Northrop Grumman, which uses SILVIA as the A.I. inside its SADIE system for multiple training environments, including “simulation and training to improve U.S. military performance in ways that will ultimately save lives,” said Chen.

Personable A.I. platforms will change how we access, analyze, and process vast stores of data. Unlike pre-configured chatbots or decision tree telephone systems, though, they do have quirks as they negotiate and comprehend the world.

At the end of our demo, SILVIA started to randomize, almost as if she was thinking aloud, musing on her uses to people in the workplace. “Just like the Captain on Voyager,” she said.

Spring did a double-take and looked at the screen, mystified. “Sometimes she does say things that surprise me,” he laughed.

That’s the thing with A.I. It might be artificial but it’s also clearly highly intelligent, with a mind of its own.

Source: PCmag


Google exec: With robots in our brains, we’ll be godlike

Futurist and Google exec Ray Kurzweil thinks that once we have robotic implants, we’ll be funnier, sexier and more loving. Because that’s what artificial intelligence can do for you.

“We’re going to add additional levels of abstraction,” he said, “and create more-profound means of expression.”

More profound than Twitter? Is that possible?

Kurzweil continued: “We’re going to be more musical. We’re going to be funnier. We’re going to be better at expressing loving sentiment.”

Because robots are renowned for their musicality, their sense of humor and their essential loving qualities. Especially in Hollywood movies.

Kurzweil insists, though, that this is the next natural phase of our existence.

“Evolution creates structures and patterns that over time are more complicated, more knowledgeable, more intelligent, more creative, more capable of expressing higher sentiments like being loving,” he said. “So it’s moving in the direction that God has been described as having — these qualities without limit.”

Yes, we are becoming gods.

“Evolution is a spiritual process and makes us more godlike,” was Kurzweil’s conclusion.

Source: CNET by Chris Matyszczyk


The Ethics of AI: Fulfilling Our Desires vs. Saving Us From Ourselves

What happens as machines are called upon to make ever more complex and important decisions on our behalf?


A display at the Big Bang Data exhibition at Somerset House highlighting the data explosion that’s radically transforming our lives. (Peter Macdiarmid/Getty Images for Somerset House)

Driverless cars are among the early intelligent systems being asked to make life or death decisions. While current vehicles perform mostly routine tasks like basic steering and collision avoidance, the new generation of fully autonomous cars being test driven pose unique ethical challenges.

For example, “should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid killing a school bus full of children?”

Alternatively, should a car “swerve onto a crowded sidewalk to avoid being rear-ended by a speeding truck or stay put and place the driver in mortal danger?”

On a more mundane level, driverless cars have already faced safety questions for strictly obeying traffic laws, creating a safety hazard as the surrounding traffic goes substantially faster.

Digital assistants and our health

Imagine for a moment the digital assistant that processes a note from our doctor warning us about the results of our latest medical checkup and that we need to lose weight and stay away from certain foods. At the same time, the assistant sees from our connected health devices that we’re not exercising much anymore and that we’ve been consuming a lot of junk food lately and actually gained three pounds last week and two pounds already this week. Now, it is quitting time on Friday afternoon and the assistant knows that every Friday night we stop by our local store for a 12 pack of donuts on the way home. What should that assistant do?

Should our digital assistant politely suggest we skip the donuts this week? Should it warn us in graphic detail about the health complications we will likely face down the road if we buy those donuts tonight? Should it go as far as to threaten to lock us out of our favorite mobile games on our phone or withhold our email or some other features for the next few days as punishment if we buy those donuts? Should it quietly send a note to our doctor telling her we bought donuts and asking for advice? Or, should it go as far as to instruct the credit card company to decline the transaction to stop us from buying the donuts?
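The escalating options above amount to a policy that someone, somewhere, must encode. Here is a minimal sketch of one such policy; the thresholds, action names, and the `consent_level` setting are all invented assumptions, and the point is simply that these value judgments end up as explicit branches in code:

```python
def assistant_action(weekly_gain_lbs, doctor_warning, consent_level):
    """Pick an intervention for the Friday-donut scenario.

    consent_level caps how far the assistant may escalate:
    0 = suggestions only, 1 = adds graphic warnings,
    2 = adds notifying the doctor, 3 = adds blocking the purchase.
    """
    if not doctor_warning:
        return "say nothing"
    # Severity of the health signal drives escalation...
    if weekly_gain_lbs < 1:
        severity = 0
    elif weekly_gain_lbs < 2:
        severity = 1
    elif weekly_gain_lbs < 3:
        severity = 2
    else:
        severity = 3
    actions = [
        "politely suggest skipping the donuts",
        "warn about long-term health complications",
        "notify the doctor and ask for advice",
        "decline the credit-card transaction",
    ]
    # ...but never beyond what the user has consented to.
    return actions[min(severity, consent_level)]
```

Even this toy version forces the designer to answer the article's questions directly: who sets the thresholds, and whether the user's consent or the assistant's judgment wins when they conflict.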

The Cultural Challenge

Moreover, how should algorithms handle the cultural differences that are inherent to such value decisions? Should a personal assistant of someone living in Saudi Arabia who expresses interest in anti-government protest movements discourage further interest in the topic? Should the assistant of someone living in Thailand censor the person’s communications to edit out criticism of government officials to protect the person from reprisals?

Should an assistant that determines its user is depressed try to cheer that person up by masking negative news and deluging him with the most positive news it can find to try to improve his emotional state? What happens when those decisions are complicated by the desires of advertisers that pay for a particular outcome?

As artificial intelligence develops at an exponential rate, what are the value systems and ethics with which we should imbue our new digital servants?

When algorithms start giving us orders, should they fulfill our innermost desires or should they save us from ourselves?

This is the future of AI.

Source: Forbes

Read more on AI ethics on our post: How To Teach Robots Right and Wrong


How Google Aims To Dominate AI

More than 1,000 researchers at Google are working on machine intelligence applications

The Search Giant Is Making Its AI Open Source So Anyone Can Use It

Internally, Google has spent the last three years building a massive platform for artificial intelligence and now they’re unleashing it on the world


In November 2007, Google laid the groundwork to dominate the mobile market by releasing Android, an open-source operating system for phones. Eight years later to the month, Android has an 80 percent market share, and Google is using the same trick—this time with artificial intelligence.

Introducing TensorFlow, the Android of AI

Google is announcing TensorFlow, its open-source platform for machine learning, giving anyone with a computer, an internet connection and a casual background in deep learning algorithms access to one of the most powerful machine learning platforms ever created.

More than 50 Google products have adopted TensorFlow to harness deep learning (machine learning using deep neural networks) as a tool, from identifying you and your friends in the Photos app to refining its core search engine. Google has become a machine learning company. Now they’re taking what makes their services special, and giving it to the world.

TensorFlow is a library of files that allows researchers and computer scientists to build systems that break down data, like photos or voice recordings, and have the computer make future decisions based on that information. This is the basis of machine learning: computers understanding data, and then using it to make decisions. When scaled to be very complex, machine learning is a stab at making computers smarter.
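TensorFlow itself is far larger, but the fit-then-decide cycle described here can be illustrated without it. The following is a minimal pure-Python sketch (not TensorFlow code): learn a line from example data by gradient descent, then use the learned model on new inputs. The data and learning rate are invented for illustration:

```python
def fit_line(xs, ys, lr=0.01, steps=5000):
    """Learn y ≈ w*x + b by gradient descent on mean squared error."""
    w = b = 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Training": learn from example input/output pairs (underlying rule: y = 2x + 1).
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]
w, b = fit_line(xs, ys)

# "Decision": apply the learned model to an input it never saw.
prediction = w * 10 + b
print(round(w, 2), round(b, 2), round(prediction, 1))  # parameters near 2 and 1
```

Deep neural networks replace the single line with millions of parameters and the hand-written gradients with automatic differentiation, but the loop — compute error, follow the gradient, repeat — is the same one TensorFlow runs at scale.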

But no matter how well a machine may complement or emulate the human brain, it doesn’t mean anything if the average person can’t figure out how to use it. That’s Google’s plan to dominate artificial intelligence—making it as simple as possible. While the machinations behind the curtains are complex and dynamic, the end result is ubiquitous tools that work, and the means to improve those tools if you’re so inclined.

Source:  Popular Science

Click here to learn more about Mark Zuckerberg’s vision for Facebook


Is an Affair in Virtual Reality Still Cheating?

I hadn’t touched another woman in an intimate way since before getting married six years ago. Then, in the most peculiar circumstances, I was doing it. I was caressing a young woman’s hands. I remember thinking as I was doing it: I don’t even know this person’s name.

After 30 seconds, the experience became too much and I stopped. I ripped off my Oculus Rift headset and stood up from the chair I was sitting on, stunned. It was a powerful experience, and I left convinced that virtual reality was not only the future of sex, but also the future of infidelity.

Whatever happens, the old rules of fidelity are bound to change dramatically. Not because people are more open or closed-minded, but because evolving technology is about to force the issue into our brains with tantalizing 1s and 0s.

Source: Motherboard


Toyota Invests $1 Billion in Artificial Intelligence Research Center in California

Breaking News, Nov. 6:

Gill Pratt, a roboticist who will oversee Toyota’s new research laboratory in the United States, at a news conference Friday in Tokyo. (Yuya Shino/Reuters)

Toyota, the Japanese auto giant, on Friday announced a five-year, $1 billion research and development effort headquartered in Silicon Valley. As planned, the compound would be one of the region’s largest research laboratories.

Conceived as a research facility bridging basic science and commercial engineering, it will be organized as a new company to be named Toyota Research Institute. Toyota will initially have a laboratory adjacent to Stanford University and another near M.I.T. in Cambridge, Mass.

Toyota plans to hire 200 scientists for its artificial intelligence research center.

The new center will initially focus on artificial intelligence and robotics technologies and will explore how humans move both outdoors and indoors, including technologies intended to help the elderly.

When the center begins operating in January, it will prioritize technologies that make driving safer for humans rather than completely replacing them. That approach is in stark contrast with existing research efforts being pursued by Google and Uber to create self-driving cars.

“We want to create cars that are both safer and incredibly fun to drive,” Dr. Pratt said. Rather than completely removing driving from the equation, he described a collection of sensors and software that will serve as a “guardian angel,” protecting human drivers.

In September, when Dr. Pratt joined Toyota, the company announced an initial artificial intelligence research effort committing $50 million in funding to the computer science departments of both Stanford and M.I.T. He said the initiative was intended to turn one of the world’s most successful carmakers into one of the world’s top software developers.

In addition to focusing on navigation technologies, the new research corporation will also apply artificial intelligence technologies to Toyota’s factory automation systems, Dr. Pratt said.

Source: NY Times

 

 


The trauma of telling Siri you’ve been dumped

Of all the ups and downs that I’ve had in my dating life, the most humiliating moment was having to explain to Siri that I got dumped.


I found an app called Picture to Burn that aims to digitally reproduce the cathartic act of burning an ex’s photo.

“Siri, John isn’t my boyfriend anymore,” I confided to my iPhone, between sobs.

“Do you want me to remember that John is not your boyfriend anymore?” Siri responded, in the stilted, masculine British robot dialect I’d selected in “settings.”

Callously, Siri then prompted me to tap either “yes” or “no.”

I was ultimately disappointed in what technology had to offer when it comes to heartache. This is one of the problems that Silicon Valley doesn’t seem to care about.

The truth is, there isn’t (yet) a quick tech fix for a breakup.

A few months into the relationship I’d asked Siri to remember which of the many Johns* in my contacts was the one I was dating. At the time, divulging this information to Siri seemed like a big step — at long last, we were “Siri Official!” Now, though, we were Siri-Separated. Having to break the news to my iPhone—my non-human, but still intimate companion—surprisingly stung.

Even if you unfollow, unfriend and restrain yourself from the temptation of cyberstalking, our technologies still hold onto traces of our relationships.

Perhaps, in the future, if I tell Siri I’ve just gotten dumped, it will know how to handle things more gently, offering me some sort of pre-programmed comfort, rather than algorithms that constantly surface reminders of the person who is no longer a “favorite” contact in my phone.

Source: Fusion 


Inside the surprisingly sexist world of artificial intelligence

Right now, the real danger in the world of artificial intelligence isn’t the threat of robot overlords — it’s a startling lack of diversity.

There’s no doubt Stephen Hawking is a smart guy. But the world-famous theoretical physicist recently declared that women leave him stumped.

“Women should remain a mystery,” Hawking wrote in response to a Reddit user’s question about the realm of the unknown that intrigued him most. While Hawking’s remark was meant to be light-hearted, he sounded quite serious discussing the potential dangers of artificial intelligence during Reddit’s online Q&A session:

The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.

Hawking’s comments might seem unrelated. But according to some women at the forefront of computer science, together they point to an unsettling truth. Right now, the real danger in the world of artificial intelligence isn’t the threat of robot overlords—it’s a startling lack of diversity.

I spoke with a few current and emerging female leaders in robotics and artificial intelligence about how a preponderance of white men have shaped the fields—and what schools can do to get more women and minorities involved. Here’s what I learned:

  1. Hawking’s offhand remark about women is indicative of the gender stereotypes that continue to flourish in science.
  2. Fewer women are pursuing careers in artificial intelligence because the field tends to de-emphasize humanistic goals.
  3. There may be a link between the homogeneity of AI researchers and public fears about scientists who lose control of superintelligent machines.
  4. To close the diversity gap, schools need to emphasize the humanistic applications of artificial intelligence.
  5. A number of women scientists are already advancing the range of applications for robotics and artificial intelligence.
  6. Robotics and artificial intelligence don’t just need more women—they need more diversity across the board.

In general, many women are driven by the desire to do work that benefits their communities, says computer scientist Marie desJardins. Men tend to be more interested in questions about algorithms and mathematical properties.

Since men have come to dominate AI, she says, “research has become very narrowly focused on solving technical problems and not the big questions.”

Source: Quartz


Enhancing Social Interaction with an AlterEgo Artificial Agent

AlterEgo: Humanoid robotics and Virtual Reality to improve social interactions

The objective of AlterEgo is the creation of an interactive cognitive architecture, implementable in various artificial agents, allowing a continuous interaction with patients suffering from social disorders. The AlterEgo architecture is rooted in complex systems, machine learning and computer vision. The project will produce a new robotic-based clinical method able to enhance social interaction of patients. This new method will change existing therapies, will be applied to a variety of pathologies and will be individualized to each patient. AlterEgo opens the door to a new generation of social artificial agents in service robotics.

Source: European Commission: CORDIS


Emotionally literate tech to help treat autism

Researchers have found that children with autism spectrum disorders are more responsive to social feedback when it is provided by technological means, rather than a human.

When therapists do work with autistic children, they often use puppets and animated characters to engage them in interactive play. However, researchers believe that small, friendly looking robots could be even more effective, not just to act as a go-between, but because they can learn how to respond to a child’s emotional state and infer his or her intentions.

“Children with autism spectrum disorders prefer to interact with non-human agents, and robots are simpler and more predictable than humans, so they can serve as an intermediate step for developing better human-to-human interaction,” said Professor Bram Vanderborght of Vrije Universiteit Brussel, Belgium.

“Researchers have found that children with autism spectrum disorders are more responsive to social feedback when it is provided by technological means, rather than a human,” said Prof. Vanderborght.

Source: Horizon Magazine


Meet Pineapple, NKU’s newest artificial intelligence

Pineapple will be used for the next three years for research into social robotics

“Robots are getting more intelligent, more sociable. People are treating robots like humans! People apply humor and social norms to robots,” Dr. [Austin] Lee said. “Even when you think logically there’s no way, no reason, to do that; it’s just a machine without a heart. But because people attach human attributes to robots, I think a robot can be an effective persuader.”

Dr. Austin Lee and Anne Thompson with Pineapple the robot

Source: The Northerner


Geoff Hinton on “AI as a friend”


Geoff Hinton – Distinguished Researcher at Google and Distinguished Emeritus Professor at the University of Toronto

He [Geoff Hinton] painted a picture of the near-future in which people will chat with their computers, not only to extract information, but for fun – reminiscent of the film, Her, in which Joaquin Phoenix falls in love with his intelligent operating system.

“It’s not that far-fetched,” Hinton said. “I don’t see why it shouldn’t be like a friend. I don’t see why you shouldn’t grow quite attached to them.”

Source: The Guardian – Thursday 21 May 2015


Owners of first humanoid robot sign agreement not to have sex with it

© Yuya Shino / Reuters

The Japan-based company SoftBank, which created Pepper the robot, requires customers to sign a document forbidding them from using the humanoid for sexual purposes or from creating sexualized apps for it.

Even after having paid nearly $2,000 for the robot, users may have to return Pepper to its makers should they get too personal with the emotional artificial being.

The clause reads that Pepper must not be used “for sexual activity and actions for the purpose of indecent acts, or acts for the purpose of meeting and dating and making acquaintance of the opposite sex.”

Currently, Pepper is available for purchase only to Japanese residents, who must be older than 20.

Source: rt.com


How To Teach Robots Right and Wrong

Artificial Moral Agents

Nayef Al-Rodhan

Over the years, robots have become smarter and more autonomous, but so far they still lack an essential feature: the capacity for moral reasoning. This limits their ability to make good decisions in complex situations.

The inevitable next step, therefore, would seem to be the design of “artificial moral agents,” a term for intelligent systems endowed with moral reasoning that are able to interact with humans as partners. In contrast with software programs, which function as tools, artificial agents have various degrees of autonomy.

However, robot morality is not simply a binary variable. In their seminal work Moral Machines, Yale’s Wendell Wallach and Indiana University’s Colin Allen analyze different gradations of the ethical sensitivity of robots. They distinguish between operational morality and functional morality. Operational morality refers to situations and possible responses that have been entirely anticipated and precoded by the designer of the robot system. This could include the profiling of an enemy combatant by age or physical appearance.

The most critical of these dilemmas is the question of whose morality robots will inherit.

Functional morality involves robot responses to scenarios unanticipated by the programmer, where the robot will need some ability to make ethical decisions alone. Here, they write, robots are endowed with the capacity to assess and respond to “morally significant aspects of their own actions.” This is a much greater challenge.

The attempt to develop moral robots faces a host of technical obstacles, but, more important, it also opens a Pandora’s box of ethical dilemmas.

Moral values differ greatly from individual to individual, across national, religious, and ideological boundaries, and are highly dependent on context. Even within any single category, these values develop and evolve over time.

Uncertainty over which moral framework to choose underlies the difficulty and limitations of ascribing moral values to artificial systems … To implement either of these frameworks effectively, a robot would need to be equipped with an almost impossible amount of information. Even beyond the issue of a robot’s decision-making process, the specific issue of cultural relativism remains difficult to resolve: no one set of standards and guidelines for a robot’s choices exists.    

For the time being, most questions of relativism are being set aside for two reasons. First, the U.S. military remains the chief patron of artificial intelligence for military applications, and Silicon Valley for other applications. As such, American interpretations of morality, with their emphasis on freedom and responsibility, will remain the default.

Source: Foreign Affairs The Moral Code, August 12, 2015

PL – EXCELLENT summary of a very complex, delicate but critical issue Professor Al-Rodhan!

In our work we propose an essential activity in the process of moralizing AI that is being overlooked: an approach that facilitates what you put so well, enabling “AI to interact with humans as partners.”

We question the possibility that binary-coded, logic-based AI in its current form will one day switch from amoral to moral. This would first require universal agreement on what constitutes morals, and second, the successful upload/integration of morals or moral capacity into AI computing.

We do think AI can be taught “culturally relevant” moral reasoning, though, by implementing a new human/AI interface that includes a collaborative engagement protocol: a protocol that makes it possible for the AI to learn what is culturally relevant to each person individually, and then to interact with that person based on what it has learned.

We call this a “whole person” engagement protocol. This person-focused approach includes AI/human interaction that embraces quantum cognition as a way of understanding what appears to be human irrationality. [Behavior and choices that, judged from a classical probability-based decision model, appear irrational and cannot be computed.]

This whole-person approach has a different purpose, and can produce different outcomes, than current omniscient, clandestine-style methods of AI/human information-gathering, which are more like spying than collaborating: the human’s awareness of self and situation is not advanced, but only benefited as it relates to things to buy, places to go and schedules to meet.

Visualization is a critical component for AI to engage the whole person. In this case, a visual that displays interlinking data for the human: one that breaks through the limitations of human working memory by displaying complex data about a person and situation in context, and that incorporates a human’s two most basic, reliable ways of knowing (big picture and details), which have to be kept in dialogue with one another. This makes it possible for the person themselves to make meaning, decide and act, in real time. [The value of visualization was demonstrated in physics in 2013 with the discovery of the amplituhedron, which replaced 500 pages of algebraic formulas with one simple visual, thus reducing the overwhelm related to linear processing.]

This kind of collaborative engagement between AI and humans (even groups of humans) sets the stage for AI to offer real-time personalized feedback for/about the individual or group. It can put the individual in the driver’s seat of his/her life as it relates to self and situation. It makes it possible for humans to navigate any kind of complex human situation such as, for instance, personal growth, relationships, child rearing, health, career, company issues, community issues, conflicts, etc … (In simpler terms, what we refer to as the “tough human stuff.”)

AI could then address human behavior, which, up to now, has been the elephant in the room for coders and AI developers.

We recognize that this model for AI / human interaction does not solve the ultimate AI morals/values dilemma. But it could serve to advance four major areas of this discussion:

  1. By feeding back morals/values data to individual humans, it could advance their own awareness more quickly. (The act of seeing complex contextual data expands consciousness for humans and makes it possible for them to shift and grow.)
  2. It would help humans help themselves right now (not 10 or 20 years from now).
  3. It would create a new class of data, perceptual data, as it relates to individual beliefs that drive human behavior.
  4. It would allow AI to process this additional “perceptual” data, collectively over time, to become a form of “artificial moral agent” with enhanced “moral reasoning,” “working in partnership with humans.”
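The collaborative engagement loop described above could be sketched in code. This is purely an illustrative toy, not anything from the post: all class names, value tags and numbers are invented, and it reduces "learning a person's values" to a simple weighted-tag profile.

```python
# Toy sketch of a "whole person" collaborative engagement protocol:
# the agent records the values each individual discloses, then ranks
# options by alignment with that individual's own values.
from collections import defaultdict

class WholePersonAgent:
    def __init__(self):
        # per-person record of disclosed values ("perceptual data")
        self.values = defaultdict(dict)

    def learn(self, person, value, importance):
        """Record what matters to this individual (importance in 0.0-1.0)."""
        self.values[person][value] = importance

    def feedback(self, person, options):
        """Rank options by how well they align with the person's own values."""
        profile = self.values[person]
        def alignment(option):
            return sum(profile.get(tag, 0.0) for tag in option["tags"])
        return sorted(options, key=alignment, reverse=True)

agent = WholePersonAgent()
agent.learn("alex", "family", 0.9)
agent.learn("alex", "career", 0.4)
ranked = agent.feedback("alex", [
    {"name": "work late", "tags": ["career"]},
    {"name": "family dinner", "tags": ["family"]},
])
```

The key design point, matching item 4, is that the moral/values data stays tied to the individual who provided it, rather than being pooled into one universal moral framework.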


Get Schooled turns to the Internet to create a free teen-friendly hub where students can access info in one place

“I began my career as a high school teacher in the Bronx at a 5,000-student high school that’s since been shut down for chronic low-performance. That experience helped me understand how alone so many young people are as they are trying to figure out their future. Their parents are busy, their friends are worried about their own issues, and often they don’t have a teacher or other adult who is there to guide them,” says Marie Groark, Executive Director of GetSchooled.com.

PL – Did you know the average high school student spends less than one hour per school year with a guidance counselor mulling over college decisions? This, according to the National Association for College Admission Counseling.

Not only is this not nearly enough time to make decisions that can impact the rest of their lives, but for kids whose families can’t afford college prep, that might be their only interaction with someone equipped to steer them toward higher education.

Get Schooled has turned to the Internet to create a free teen-friendly hub where students can access relevant info in one place, from how to find and apply for scholarships to info on standardized tests to what type of school fits their personality. They cut the boredom factor with celebrity interviews and a gamification model that awards students points as they engage, redeemable for offline rewards.
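A points-for-engagement model like the one described can be sketched in a few lines. Get Schooled's actual system is not public, so the activity names, point values and redemption cost below are invented for illustration only.

```python
# Hypothetical gamification model: students earn points for engagement
# activities and redeem them for offline rewards.
class RewardsAccount:
    POINTS = {"watch_video": 10, "take_quiz": 25, "apply_scholarship": 100}

    def __init__(self):
        self.points = 0

    def engage(self, activity):
        """Award points for a recognized engagement activity."""
        self.points += self.POINTS.get(activity, 0)

    def redeem(self, cost):
        """Spend points on an offline reward if the balance allows."""
        if self.points >= cost:
            self.points -= cost
            return True
        return False

student = RewardsAccount()
student.engage("watch_video")        # +10
student.engage("apply_scholarship")  # +100
redeemed = student.redeem(50)        # succeeds, leaving 60 points
```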

We believe a role for AI, as a next step in this expanding opportunity, is to engage and collaborate with students individually about their own life and future. Get to know the unique perspective and situation of each student. To guide the student in what they personally need precisely when they need it. Equipping them with information tailored to their own personal journey. 

Source: Fast Company


Why artificial intelligence could be the ultimate prize

The five biggest technology companies in the Western world are each competing to create their own virtual assistants; your personal guides to help navigate the digital world.

Facebook recently announced a concierge service called “M” through its Messenger app, and most people have already played with Apple’s Siri (which got a big upgrade last week for the new Apple TV).

Add to that Google Now, Microsoft’s Cortana and Amazon, which has the Echo – a voice-activated living-room device that can control the ambience of your home – and the stage is set for a showdown.

You will be asking your Siri or Cortana to order food, book flights, make restaurant bookings, call a cab, have your car repaired, call Ryanair customer service and buy everything. It’s the super-charged, super-lucrative Search 2.0.

What this means in practice is that services will become proactive: your virtual assistant learns more about you and it will start to tell you what you need, without you having to ask.
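"Telling you what you need without you having to ask" boils down to learning from observed behavior and volunteering a suggestion. Here is a minimal sketch of that idea, assuming nothing about any real assistant's API: it simply counts habits and proposes the most frequent one.

```python
# Minimal sketch of a proactive assistant: it observes what the user
# does, then volunteers the most frequent action without being asked.
from collections import Counter

class ProactiveAssistant:
    def __init__(self):
        self.habits = Counter()

    def observe(self, action):
        """Learn from what the user does (orders, bookings, and so on)."""
        self.habits[action] += 1

    def suggest(self):
        """Volunteer the user's most frequent action, unprompted."""
        if not self.habits:
            return None
        action, _count = self.habits.most_common(1)[0]
        return f"Shall I {action} again?"

assistant = ProactiveAssistant()
for action in ["order pizza", "book cab", "order pizza"]:
    assistant.observe(action)
suggestion = assistant.suggest()
```

Real assistants layer far richer context (location, time, contacts) on top, but the commercial logic is the same: the more the assistant observes, the better its unprompted suggestions, which is exactly why the data is "the ultimate prize."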

So what’s in it for the companies?

Eventually, the virtual assistant that wins – and the company behind it – will know you better than you know yourself, so you can’t live life without it. That’s the ultimate prize.

Source: stuff.co.nz


Human-robot: A new kind of love?

Intimate artificial intelligence

If a robot could be built to be as sensitive and caring as humans can be, would you want one? They could enter our lives so totally that we might even fall in love with them.

It’s time to think about robots: what they can do for us and what they might mean to us before we get in too deep.

Today the idea of someone loving a robot may seem strange or even utterly wrong. Yet over history, opinions of what are morally acceptable actions and what are not have changed constantly. There may be no reason to think our attitude to loving an artificial intelligence will be any different.

Source: BBC

PL – Here are more blog posts with different perspectives on the topic: Wider debate around sex robots encouraged; Sex Dolls with Artificial Intelligence to Ease Your Loneliness?; SIRI-OUSLY: Sex Robots are actually going to be good for humanity


Introducing Watson’s Discovery Advisor

Watch how IBM Watson is making connections for humans in healthcare, law enforcement, finance, retail, government, manufacturing, energy and education, forging new partnerships between humans and computers to enhance, scale and accelerate human expertise.

Quote from the video: “The next great innovations will come from people who are able to make the connections that others cannot put together.”

PL – These advances are great! But we are particularly interested in ways that Watson can help humans with tough human issues: the inter- and intra-personal space, as we, the bloggers of this site, like to call it.


AI to the rescue? Following doctor’s orders is exhausting and time-consuming

Did you know that patients with type 2 diabetes would need to spend 143 minutes per day taking care of themselves to follow every doctor’s orders?

It’s called the burden of treatment, and it’s a tough reality in healthcare today, across illnesses of all kinds. It is the burden that treatment places on the patient, on his or her family and friends, and on the doctors who care for them: increased pressure and anxiety, financial strain, and additional demands on time for doctor visits, tests and trips to the pharmacy. Many patients fail to handle it.

The current method of discovery is “conversation.” But, says Dr. Victor Montori, of the Mayo Clinic, “We need a different way of practicing medicine for patients.”

“I do not think that change will come quietly,” Dr. Montori says. “I am focused on a patient revolution led by patients, in partnership with health professionals, to make healthcare primarily about the welfare of patients.”

Phil Lawson: The current method of discovery is “conversation”? Who has the time to do that well these days, when tweets and “likes” are common forms of communication?

We’ve created planes, trains and automobiles to transport our bodies farther, faster. We’ve created tech to connect us faster to the “things” we want to buy. But we have yet to create faster, better ways for our brains to process complex human scenarios — to help us overcome the seven-item barrier of working memory; to help us connect the dots in life, work, the world.

It’s time for tech to go where no tech has gone before.

Currently, IBM’s Watson is making great strides in diagnosis and treatment for patients, but AI must go deeper. It must get personal. This requires a different kind of approach to coding, one that moves beyond omniscient programming and involves AI-to-human collaboration.

Below is an example of a well-being application of our behavior-growth tech that could be customized to meet the burden-of-treatment challenge, showing how AI can add value.

For more info on this approach see Spherit.com

