Sundar Pichai says the future of Google is AI. But can he fix the algorithm?

I was asking in the context of the aftermath of the 2016 election and the misinformation that companies like Facebook, Twitter, and Google were found to have spread.

“I view it as a big responsibility to get it right,” he says. “I think we’ll be able to do these things better over time. But I think the answer to your question, the short answer and the only answer, is we feel huge responsibility.”

But it’s worth questioning whether Google’s systems are making the right decisions, even as they make some decisions much easier.

People are already skittish about how much Google knows about them, and they are unclear on how to manage their privacy settings. Pichai thinks that’s another one of those problems that AI could fix, “heuristically.”

“Down the line, the system can be much more sophisticated about understanding what is sensitive for users, because it understands context better,” Pichai says. “[It should be] treating health-related information very differently from looking for restaurants to eat with friends.” Instead of asking users to sift through a “giant list of checkboxes,” a user interface driven by AI could make it easier to manage.
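As a thought experiment on what Pichai is describing, a context-aware settings layer might first classify an activity record by sensitivity before deciding how to handle it. The sketch below is a minimal, hypothetical illustration (simple keyword matching standing in for the much richer context understanding he envisions); none of the names come from Google.

```python
# Minimal sketch, not Google's system: classify a query's privacy sensitivity
# from context so that stricter defaults apply to health topics than to, say,
# restaurant searches. Keywords and tiers are illustrative assumptions.
SENSITIVE_KEYWORDS = {
    "health": {"symptom", "diagnosis", "medication", "therapy", "clinic"},
    "finance": {"loan", "salary", "debt", "bankruptcy"},
}

def classify_sensitivity(query: str) -> str:
    """Return a coarse sensitivity tier for a user activity record."""
    words = set(query.lower().split())
    for tier, keywords in SENSITIVE_KEYWORDS.items():
        if words & keywords:
            return tier      # e.g., shorter retention, no ad personalization
    return "routine"         # e.g., looking for restaurants to eat with friends

print(classify_sensitivity("best migraine medication"))  # -> "health"
print(classify_sensitivity("restaurants near me"))       # -> "routine"
```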

Of course, what’s good for users versus what’s good for Google versus what’s good for the other businesses that rely on Google’s data is a tricky question. And it’s one that AI alone can’t solve. Google is responsible for those choices, whether they’re made by people or robots.

The amount of scrutiny companies like Facebook and Google — and Google’s YouTube division — face over presenting inaccurate or outright manipulative information is growing every day, and for good reason.

Pichai thinks that Google’s basic approach for search can also be used for surfacing good, trustworthy content in the feed. “We can still use the same core principles we use in ranking around authoritativeness, trust, reputation.”

What he’s less sure about, however, is what to do beyond the realm of factual information — with genuine opinion: “I think the issue we all grapple with is how do you deal with the areas where people don’t agree or the subject areas get tougher?”

When it comes to presenting opinions on its feed, Pichai wonders if Google could “bring a better perspective, rather than just ranking alone. … Those are early areas of exploration for us, but I think we could do better there.”

Source: The Verge




The idea was to help you and me make better decisions amid cognitive overload

IBM Chairman, President, and Chief Executive Officer Ginni Rometty. Photo: Stephanie Sinclair for Bloomberg Businessweek

If I considered the initials AI, I would have preferred augmented intelligence.

It’s the idea that each of us is going to need help on all important decisions.

A study found that, on average, a third of your decisions are really great decisions, a third are not optimal, and a third are just wrong. We’ve estimated the market at $2 billion for tools to make better decisions.

That’s what led us all to really calling it cognitive.

“Look, we really think this is about man and machine, not man vs. machine. This is an era—really, an era that will play out for decades in front of us.”

We set out to build an AI platform for business.

AI would be vertical. You would train it to know medicine. You would train it to know underwriting of insurance. You would train it to know financial crimes. Train it to know oncology. Train it to know weather. And it isn’t just about billions of data points. In the regulatory world, there aren’t billions of data points. You need to train and interpret something with small amounts of data.

This is really another key point about professional AI. Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.”

What a doctor wants is, “OK, give me the possible answers. Tell me why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”
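The shape of the answer Rometty describes is easy to make concrete. Here is a minimal, hypothetical sketch (not IBM’s API, and all names are invented) of a result that carries possible answers, confidence, evidence, and follow-up questions rather than a single verdict:

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """One candidate answer, with the evidence a professional would ask for."""
    diagnosis: str
    confidence: float                               # the "percent confident"
    evidence: list = field(default_factory=list)    # "can I see the research?"
    follow_up: list = field(default_factory=list)   # "what more would you like to know?"

def present(hypotheses):
    """Show ranked possibilities instead of a single black-and-white answer."""
    for h in sorted(hypotheses, key=lambda h: h.confidence, reverse=True):
        print(f"{h.diagnosis}: {h.confidence:.0%} confident")
        for e in h.evidence:
            print(f"  evidence: {e}")
        for q in h.follow_up:
            print(f"  to refine: {q}")

present([
    Hypothesis("Condition A", 0.62, ["Trial X (2015)"], ["Recent lab results?"]),
    Hypothesis("Condition B", 0.27, ["Case series Y"], ["Family history?"]),
])
```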

It’s our responsibility if we build this stuff to guide it safely into the world.

Source: Bloomberg




IBM Watson CTO on Why Augmented Intelligence Beats AI

If you look at almost every other tool that has ever been created, our tools tend to be most valuable when they’re amplifying us, when they’re extending our reach, when they’re increasing our strength, when they’re allowing us to do things that we can’t do by ourselves as human beings. That’s really the way that we need to be thinking about AI as well, and to the extent that we actually call it augmented intelligence, not artificial intelligence.

Some time ago we realized that this thing called cognitive computing was really bigger than us; it was bigger than IBM, bigger than any one vendor in the industry, and bigger than any of the one or two solution areas we were going to focus on. We had to open it up, which is when we shifted from focusing on solutions to offering more of a platform of services, where each service is individually focused on a different part of the problem space.

What we’re talking about now is a set of services, each of which does something very specific, and each of which tries to deal with a different part of our human experience, with the idea that anybody building an application, anybody who wants to solve a social or consumer or business problem, can do that by taking our services and composing them into an application.

If the doctor can now make decisions that are more informed, that are based on real evidence, that are supported by the latest facts in science, that are more tailored and specific to the individual patient, it allows them to actually do their job better. For radiologists, it may allow them to see things in the image that they might otherwise miss or get overwhelmed by. It’s not about replacing them. It’s about helping them do their job better.

That’s really the way to think about this stuff: it will have its greatest utility when it is allowing us to do what we do better than we could by ourselves, when the combination of the human and the tool together is greater than either one of them would’ve been by themselves. That’s really the way we think about it. That’s how we’re evolving the technology. That’s where the economic utility is going to be.

There are lots of things that we as human beings are good at. There are also a lot of things that we’re not very good at, and that, I think, is where cognitive computing really starts to make a huge difference: when it’s able to bridge that distance and make up that gap.

A way I like to say it is it doesn’t do our thinking for us, it does our research for us so we can do our thinking better, and that’s true of us as end users and it’s true of advisors.

Source: PCMag




80% of what human physicians currently do will soon be done instead by technology, allowing physicians to focus on the patient-physician interaction

Data-driven AI technologies are well suited to address chronic inefficiencies in health markets, potentially lowering costs by hundreds of billions of dollars, while simultaneously reducing the time burden on physicians.

These technologies can be leveraged to capture the massive volume of data that describes a patient’s past and present state, project potential future states, analyze that data in real time, assist in reasoning about the best way to achieve patient and physician goals, and provide both patient and physician constant real-time support. Only AI can fulfill such a mission. There is no other solution.

Technologist and investor Vinod Khosla posited that 80 percent of what human physicians currently do will soon be done instead by technology, allowing physicians to focus their time on the really important elements of the patient-physician interaction.

Within five years, the healthcare sector has the potential to undergo a complete metamorphosis courtesy of breakthrough AI technologies. Here are just a few examples:

1. Physicians will practice with AI virtual assistants (using, for example, software tools similar to Apple’s Siri, but specialized to the specific healthcare application).

2. Physicians with AI virtual assistants will be able to treat 5X – 10X as many patients with chronic illnesses as they do today, with better outcomes than in the past.

Patients will have a constant “friend” providing a digital health conscience to advise, support, and even encourage them to make healthy choices and pursue a healthy lifestyle.

3. AI virtual assistants will support both patients and healthy individuals in health maintenance with ongoing and real-time intelligent advice.

Our greatest opportunity for AI-enhancement in the sector is keeping people healthy, rather than waiting to treat them when they are sick. AI virtual assistants will be able to acquire deep knowledge of diet, exercise, medications, emotional and mental state, and more.

4. Medical devices previously only available in hospitals will be available in the home, enabling much more precise and timely monitoring and leading to a healthier population.

5. Affordable new tools for diagnosis and treatment of illnesses will emerge based on data collected from extant and widely adopted digital devices such as smartphones.

6. Robotics and in-home AI systems will assist patients with independent living.

But don’t be misled: the best metaphor is that these systems are learning like humans learn, and that they are in their infancy, just starting to crawl. Healthcare AI virtual assistants will soon be able to walk, and then run.

Many of today’s familiar AI engines, personified in Siri, Cortana, Alexa, Google Assistant or any of the hundreds of “intelligent chatbots,” are still immature and their capabilities are highly limited. Within the next few years they will be conversational, they will learn from the user, they will maintain context, and they will provide proactive assistance, just to name a few of their emerging capabilities.

And with these capabilities applied in the health sector, they will enable us to keep millions of citizens healthier, give physicians the support and time they need to practice, and save trillions of dollars in healthcare costs. Welcome to the age of AI.

Source: Venture Beat


What if a Computer Could Help You with Psychotherapy, Alter Your Habits? #AI

TAO Connect

One of the pioneers in this space has been Australia’s MoodGYM, first launched in 2001. It now has over 1 million users around the world and has been the subject of over two dozen randomized clinical research trials showing that this inexpensive (or free!) intervention can work wonders on depression, for those who can stick with it. And online therapy has been available since 1996.

TAO Connect — the TAO stands for “therapist assisted online” — is something a little different than MoodGYM. Instead of simply walking a user through a series of psychoeducational modules (which vary in their interactivity and information presentation), it uses multiple modalities and machine learning (a form of artificial intelligence) to teach more effectively the techniques that can keep anxiety at bay for the rest of your life. It can be used for anxiety, depression, stress, and pain management, and can help a person with relationship problems and with building greater resilience in dealing with stress.

TAO Connect is based on the Stepped Care model of treatment delivery, offering more intensive and more varied treatment options depending on the severity of the mental illness a person presents with. It is a model used elsewhere in the world, but it has traditionally not been used as often in the U.S. (except in resource-constrained clinics, like university counseling centers).

Today, TAO Connect is only available through a therapist whose practice subscribes to the service.

Source: PsychCentral

 


A blueprint for coexistence with #AI

In September 2013, I was diagnosed with fourth-stage lymphoma.

This near-death experience has not only changed my life and priorities, but also altered my view of artificial intelligence—the field that captured my selfish attention for all those years.

This personal reformation gave me an enlightened view of what AI should mean for humanity. Many of the recent discussions about AI have concluded that this scientific advance will likely take over the world, dominate humans, and end poorly for mankind.

But my near-death experience has enabled me to envision an alternate ending to the AI story—one that makes the most of this amazing technology while empowering humans not just to survive, but to thrive.

Love is what is missing from machines. That’s why we must pair up with them, to leaven their powers with what only we humans can provide. Your future AI diagnostic tool may well be 10 times more accurate than human doctors, but patients will not want a cold pronouncement from the tool: “You have fourth stage lymphoma and a 70 percent likelihood of dying within five years.” That in itself would be harmful.

Kai-Fu Lee. Photo: David Paul Morris/Bloomberg

Patients would benefit, in health and heart, from a “doctor of love” who will spend as much time as the patient needs, always be available to discuss their case, and who will even visit the patients at home. This doctor might encourage us by sharing stories such as, “Kai-Fu had the same lymphoma, and he survived, so you can too.”

This kind of “doctor of love” would not only make us feel better and give us greater confidence, but would also trigger a placebo effect that would increase our likelihood of recuperation. Meanwhile, the AI tool would watch the Q&A between the “doctor of love” and the patient carefully, and then optimize the treatment. If scaled across the world, the number of “doctors of love” would greatly outnumber today’s doctors.

Let us choose to let machines be machines, and let humans be humans. Let us choose to use our machines, and love one another.

Kai-Fu Lee, Ph.D., is the Founder and CEO of Sinovation Ventures and the president of its Artificial Intelligence Institute.

Source: Wired


The holy grail is modifying patients’ behavior – #AI

Companies like DexCom are focused on the diabetes epidemic, Jimenez said, and the holy grail is modifying patients’ behavior.

That would mean combining the stream of data from glucose monitoring, insulin measurements, patient activity and meals, and applying machine learning to derive insights so the software can send alerts and recommendations back to patients and their doctors, she said.

“But where we are in our maturity as an industry is just publishing numbers,” Jimenez explained. “So we’re just telling people what their glucose number is, which is critical for a type 1 diabetic. But a type 2 diabetic needs to engage with an app, and be compelled to interact with the insights. It’s really all about the development of the app.”

The ultimate goal, perhaps, would be to develop a user interface that uses the insights gained from machine learning to actually prompt diabetic patients to change their behavior.
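A toy version of that pipeline helps make the point: raw glucose numbers go in, and an actionable recommendation comes out. The thresholds and rules below are illustrative assumptions only, not DexCom logic.

```python
# Minimal sketch (hypothetical thresholds): turning a raw glucose stream
# into an actionable alert rather than "just publishing numbers".
def glucose_alert(readings_mg_dl, meals_logged):
    """Return a recommendation from recent continuous-glucose readings."""
    latest = readings_mg_dl[-1]
    trend = readings_mg_dl[-1] - readings_mg_dl[-4]  # change over last 3 samples
    if latest < 70:
        return "Low glucose: take fast-acting carbohydrate now."
    if latest > 180 and trend > 0 and not meals_logged:
        return "Rising high glucose with no meal logged: check insulin dose."
    return None  # nothing actionable; stay quiet to avoid alert fatigue

print(glucose_alert([110, 130, 155, 185], meals_logged=False))
```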

This point was echoed by Jean Balgrosky, an investor who spent 20 years as the CIO of large, complex healthcare organizations such as San Diego’s Scripps Health. “At the end of the day,” she said, “all this machine learning has to be absorbed and consumed by humans—to take care of humans in healthcare.”

Source: Xconomy


Artificial Intelligence Key To Treating Illness

UC and one of its graduates have teamed up to use artificial intelligence to analyze the fMRIs of bipolar patients to determine treatment.

In a proof of concept study, Dr. Nick Ernest harnessed the power of his Psibernetix AI program to determine if bipolar patients could benefit from a certain medication. Using fMRIs of bipolar patients, the software looked at how each patient would react to lithium.

Fuzzy Logic appears to be very accurate

The computer software predicted with 100 percent accuracy how patients would respond. It also predicted the actual reduction in manic symptoms after the lithium treatment with 92 percent accuracy.

UC psychiatrist David Fleck partnered with Ernest and Dr. Kelly Cohen on the study. Fleck says without AI, coming up with a treatment plan is difficult. “Bipolar disorder is a very complex genetic disease. There are multiple genes, not all of which we understand or know how they work, and there is interaction with the environment.”

Ernest emphasizes the advanced software is more than a black box. It thinks in linguistic sentences. “So at the end of the day we can go in and ask the thing why did you make the prediction that you did? So it has high accuracy but also the benefit of explaining exactly why it makes the decision that it did.”
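To illustrate the flavor of a fuzzy-logic predictor that can state its reasoning in linguistic sentences, here is a minimal sketch; the feature, cut-points, and rule are invented for illustration and have nothing to do with Psibernetix’s actual model.

```python
# Minimal fuzzy-logic sketch (hypothetical feature and cut-points), in the
# spirit of a system that can explain its prediction in plain language.
def membership_high(x, low, high):
    """Degree (0..1) to which x counts as 'high' between two anchor points."""
    return max(0.0, min(1.0, (x - low) / (high - low)))

def predict_response(activity_score):
    """Predict lithium response and return a linguistic explanation."""
    degree = membership_high(activity_score, low=0.2, high=0.8)
    label = "likely responder" if degree >= 0.5 else "unlikely responder"
    explanation = (f"Regional activity {activity_score:.2f} is 'high' to degree "
                   f"{degree:.2f}, so the patient is a {label}.")
    return label, explanation

label, why = predict_response(0.65)
print(why)
```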

More tests are needed to make sure the artificial intelligence continues to accurately predict medication for bipolar patients.

Source: WVXU


This adorable #chatbot wants to talk about your mental health

Research conducted by the federal government in 2015 found that only 41 percent of U.S. adults with a mental health condition in the previous year had gotten treatment. That dismal treatment rate has to do with cost, logistics, stigma, and being poorly matched with a professional.

Chatbots are meant to remove or diminish these barriers. Creators of mobile apps for depression and anxiety, among other mental health conditions, have argued the same thing, but research found that very few of the apps are based on rigorous science or are even tested to see if they work. 

That’s why Alison Darcy, a clinical psychologist at Stanford University and the CEO and founder of Woebot, wants to set a higher standard for chatbots. Darcy co-authored a small study published this week in the Journal of Medical Internet Research that demonstrated Woebot can reduce symptoms of depression in two weeks.

Woebot presumably does this in part by drawing on techniques from cognitive behavioral therapy (CBT), an effective form of therapy that focuses on understanding the relationship between thoughts and behavior. He’s not there to heal trauma or old psychological wounds. 

“We don’t make great claims about this technology,” Darcy says. “The secret sauce is how thoughtful [Woebot] is as a CBT therapist. He has a set of core principles that override everything he does.” 

His personality is also partly modeled on a charming combination of Spock and Kermit the Frog.

Jonathan Gratch, director for virtual human research at the USC Institute for Creative Technologies, has studied customer service chatbots extensively and is skeptical of the idea that one could effectively intuit our emotional well-being.  

“State-of-the-art natural language processing is getting increasingly good at individual words, but not really deeply understanding what you’re saying,” he says.

The risks of using a chatbot for your mental health are manifold, Gratch adds.

Darcy acknowledges Woebot’s limitations. He’s only for those 18 and over. If your mood hasn’t improved after six weeks of exchanges, he’ll prompt you to talk about getting a “higher level of care.” Upon seeing signs of suicidal thoughts or behavior, Woebot will provide information for crisis phone, text, and app resources. The best way to describe Woebot, Darcy says, is probably as “gateway therapy.”

“I have to believe that applications like this can address a lot of people’s needs.”

Source: Mashable


These chatbots may one day even replace your doctor

As artificial intelligence programs learn to better communicate with humans, they’ll soon encroach on careers once considered untouchable, like law and accounting.

These chatbots may one day even replace your doctor.

This January, the United Kingdom’s National Health Service launched a trial with Babylon Health, a startup developing an AI chatbot. 

The bot’s goal is the same as the helpline, only without humans: to avoid unnecessary doctor appointments and help patients with over-the-counter remedies.

Using the system, patients chat with the bot about their symptoms, and the app determines whether they should see a doctor, go to a pharmacy, or stay home. It’s now available to about 1.2 million Londoners.

But the upcoming version of Babylon’s chatbot can do even more: in tests, it’s now diagnosing patients faster than human doctors can, says Dr. Ali Parsa, the company’s CEO. The technology can accurately diagnose about 80 percent of illnesses commonly seen by primary care doctors.
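Conceptually, the triage step reduces to routing a symptom report to one of three dispositions. Here is a toy sketch (the symptom lists are hypothetical placeholders, not Babylon’s rules):

```python
# Minimal triage sketch (hypothetical rules): route a symptom report to
# doctor / pharmacy / home care, as the Babylon bot is described as doing.
RED_FLAGS = {"chest pain", "shortness of breath", "severe bleeding"}
PHARMACY_TREATABLE = {"mild headache", "runny nose", "sore throat"}

def triage(symptoms: set[str]) -> str:
    if symptoms & RED_FLAGS:
        return "see a doctor urgently"
    if symptoms <= PHARMACY_TREATABLE:
        return "visit a pharmacy for an over-the-counter remedy"
    return "stay home, rest, and monitor symptoms"

print(triage({"mild headache"}))          # -> pharmacy
print(triage({"chest pain", "nausea"}))   # -> doctor
```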

The reason these chatbots are increasingly important is cost: two-thirds of money moving through the U.K.’s health system goes to salaries.

“Human beings are very expensive,” Parsa says. “If we want to make healthcare affordable and accessible for everyone, we’ll need to attack the root causes.”

Globally, there are 5 million fewer doctors today than needed, so anything that lets doctors do their jobs faster and more easily will be welcome, Parsa says.

Half the world’s population has little access to health care — but they have smartphones. Chatbots could get them the help they need.

Source: NBC News


Our minds need medical attention, AI may be able to help there

AI could be useful for more than just developing Siri; it may bring about a new, smarter age of healthcare.

A team of researchers successfully predicted diagnoses of autism using MRI data from babies between six and 12 months old.

For instance, a team of American researchers used AI to aid detection of autism in babies as young as six months. This is crucial because the first two years of life see the most neural plasticity, when the abnormalities associated with autism haven’t yet fully settled in. Earlier intervention is therefore better, yet many autistic babies are not diagnosed until 24 months.

While previous algorithms exist for detecting autism’s development using behavioral data, they have not been effective enough to be clinically useful. This team of researchers sought to improve on these attempts by employing deep learning. Their algorithm successfully predicted diagnoses of autism using MRI data from babies between six and 12 months old. Their system processed images of the babies’ cortical surface area, which grows too rapidly in developing autism. This smarter algorithm predicted autism so well that clinicians may now want to adopt it.
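As a rough illustration of the setup, and emphatically not the authors’ deep network, one can imagine a classifier over regional surface-area growth features. Here is a stand-in on synthetic data (all numbers and the labeling rule are invented):

```python
# Minimal stand-in (not the published model): a classifier over regional
# cortical surface-area growth between 6 and 12 months, on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, regions = 200, 10
growth = rng.normal(1.0, 0.1, size=(n, regions))   # surface-area growth ratios
labels = (growth.mean(axis=1) > 1.03).astype(int)  # toy rule: faster growth -> risk

model = LogisticRegression().fit(growth, labels)
print("training accuracy:", model.score(growth, labels))
```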

But human ailments aren’t just physical; our minds need medical attention, too. AI may be able to help there as well.

Facebook is beginning to use AI to identify users who may be at risk of suicide, and a startup company just built an AI therapist apparently capable of offering mental health services to anyone with an internet connection.

Source: Machine Design

 


Is Your Doctor Stumped? There’s a Chatbot for That

Doctors have created a chatbot to revolutionize communication within hospitals using artificial intelligence … basically a cyber-radiologist in app form, it can quickly and accurately provide specialized information to non-radiologists. And, like all good A.I., it’s constantly learning.

Traditionally, interdepartmental communication in hospitals is a hassle. A clinician’s assistant or nurse practitioner with a radiology question would need to get a specialist on the phone, which can take time and risks miscommunication. But using the app, non-radiologists can plug in common technical questions and receive an accurate response instantly.

“Say a patient has a creatinine [lab test to see how well the kidneys are working],” co-author and application programmer Kevin Seals tells Inverse. “You send a message, like you’re texting with a human radiologist. ‘My patient is a 5.6, can they get a CT scan with contrast?’ A lot of this is pretty routine questions that are easily automated with software, but there’s no good tool for doing that now.”
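A question like that bottoms out in a lookup against a protocol threshold, which is what makes it so automatable. Here is a hedged sketch with an illustrative cutoff; real protocols use eGFR and institution-specific policy, so the number below is an assumption, not clinical guidance.

```python
# Minimal sketch (hypothetical cutoff): the kind of routine rule the
# radiology chatbot is described as automating.
def contrast_ct_ok(creatinine_mg_dl: float) -> str:
    # Illustrative threshold only; real protocols use eGFR and local policy.
    if creatinine_mg_dl <= 1.5:
        return "Contrast CT is generally acceptable."
    return "Creatinine is elevated; consult radiology before giving contrast."

print(contrast_ct_ok(5.6))  # the value from the example question above
```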

In about a month, the team plans to make the chatbot available to everyone at UCLA’s Ronald Reagan Medical Center, see how that plays out, and scale up from there. Your doctor may never be stumped again.

Source: Inverse


AI is driving the real health care transformation

AI and machine learning are forcing dramatic business model change for all the stakeholders in the health care system.

What does AI (and machine learning) mean in the health care context?

What is the best way to treat a specific patient given her health and sociological context?

What is a fair price for a new drug or device given its impact on health outcomes?

And how can long-term health challenges such as cancer, obesity, heart disease, and other conditions be managed?

There is also the realization that treating “the whole patient” — not just isolated conditions, but attempting to improve the overall welfare of patients who often suffer from multiple health challenges — is the new definition of success, which means predictive insights are paramount.

Answering these questions is the holy grail of medicine — the path toward an entirely new system that predicts disease and delivers personalized health and wellness services to entire populations. And this change is far more important for patients and society alike than the debate now taking place in Washington.

Those who succeed in this new world will also do one other thing: They will see AI and machine learning not as a new tool, but as a whole new way of thinking about their business model.

Source: Venture Beat


Teaching an Algorithm to Understand Right and Wrong


Aristotle states that it is a fact that “all knowledge and every pursuit aims at some good,” but then continues, “What then do we mean by the good?” That, in essence, encapsulates the ethical dilemma.

We all agree that we should be good and just, but it’s much harder to decide what that entails.

“We need to decide to what extent the legal principles that we use to regulate humans can be used for machines. There is a great potential for machines to alert us to bias. We need to not only train our algorithms but also be open to the possibility that they can teach us about ourselves.” – Francesca Rossi, an AI researcher at IBM

Since Aristotle’s time, the questions he raised have been continually discussed and debated. 

Today, as we enter a “cognitive era” of thinking machines, the problem of what should guide our actions is gaining newfound importance. If we find it so difficult to denote the principles by which a person should act justly and wisely, then how are we to encode them within the artificial intelligences we are creating? It is a question that we need to come up with answers for soon.

Cultural Norms vs. Moral Values

Another issue we will have to contend with is deciding not only what ethical principles to encode in artificial intelligences but also how they are coded. As noted above, “Thou shalt not kill” is, for the most part, a strict principle; only in a few rare cases, such as for a soldier or a Secret Service agent, does it become more like a preference that is greatly affected by context.

What makes one thing a moral value and another a cultural norm? Well, that’s a tough question for even the most-lauded human ethicists, but we will need to code those decisions into our algorithms. In some cases, there will be strict principles; in others, merely preferences based on context. For some tasks, algorithms will need to be coded differently according to what jurisdiction they operate in.

Setting a Higher Standard

Most AI experts I’ve spoken to think that we will need to set higher moral standards for artificial intelligences than we do for humans.

Major industry players, such as Google, IBM, Amazon, and Facebook, recently set up a partnership to create an open platform between leading AI companies and stakeholders in academia, government, and industry to advance understanding and promote best practices. Yet that is merely a starting point.

Source: Harvard Business Review


MIT makes breakthrough in morality-proofing artificial intelligence

Researchers at MIT are investigating ways of making artificial neural networks more transparent in their decision-making.

As they stand now, artificial neural networks are a wonderful tool for discerning patterns and making predictions. But they also have the drawback of not being terribly transparent. The beauty of an artificial neural network is its ability to sift through heaps of data and find structure within the noise.

This is not dissimilar from the way we might look up at clouds and see faces amidst their patterns. And just as we might have trouble explaining to someone why a face jumped out at us from the wispy trails of a cirrus cloud formation, artificial neural networks are not explicitly designed to reveal what particular elements of the data prompted them to decide a certain pattern was at work and make predictions based upon it.

We tend to want a little more explanation when human lives hang in the balance — for instance, if an artificial neural net has just diagnosed someone with a life-threatening form of cancer and recommends a dangerous procedure. At that point, we would likely want to know what features of the person’s medical workup tipped the algorithm in favor of its diagnosis.

MIT researchers Tao Lei, Regina Barzilay, and Tommi Jaakkola designed a neural network that would be forced to provide explanations for why it reached a certain conclusion.
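Their architecture jointly learns to select a rationale from the input and to predict from it. As a much simpler stand-in for the same idea (not the MIT architecture, and trained on invented toy data), a linear text classifier can at least report which input words pushed it toward its conclusion:

```python
# Minimal stand-in: a linear text classifier that reports which input
# words contributed to its prediction, as a crude form of explanation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["irregular mass with spiculated margin", "smooth round benign cyst",
         "spiculated margin and calcifications", "benign smooth tissue"]
labels = [1, 0, 1, 0]   # 1 = suspicious, 0 = benign (toy data)

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def predict_with_rationale(text):
    """Return the prediction plus the words that pushed it toward class 1."""
    x = vec.transform([text])
    pred = clf.predict(x)[0]
    contributions = clf.coef_[0] * x.toarray()[0]   # per-word contribution
    words = vec.get_feature_names_out()
    rationale = [w for w, c in zip(words, contributions) if c > 0]
    return pred, rationale

print(predict_with_rationale("mass with spiculated margin"))
```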

Source: Extremetech

 


Why we can’t trust ‘blind big data’ to cure the world’s diseases

Once upon a time a former editor of WIRED, Chris Anderson, … envisaged how scientists would take the ever expanding ocean of data, send a torrent of bits and bytes into a great hopper, then crank the handles of huge computers that run powerful statistical algorithms to discern patterns where science cannot.

In short, Anderson dreamt of the day when scientists no longer had to think.

Eight years later, the deluge is truly upon us. Some 90 percent of the data currently in the world was created in the last two years … and there are high hopes that big data will pave the way for a revolution in medicine.

But we need big thinking more than ever before.

Today’s data sets, though bigger than ever, still afford us an impoverished view of living things.

It takes a bewildering amount of data to capture the complexities of life.

The usual response is to put faith in machine learning, such as artificial neural networks. But no matter their ‘depth’ and sophistication, these methods merely fit curves to available data.

We do not predict tomorrow’s weather by averaging historic records of that day’s weather.

… There are other limitations, not least that data are not always reliable (“most published research findings are false,” as famously reported by John Ioannidis in PLOS Medicine). Bodies are dynamic and ever-changing, while datasets often only give snapshots, and are always retrospective.

Source: Wired

 


Civil Rights and Big Data

Blogger’s note: We’ve posted several articles on the bias and prejudice inherent in big data, which with machine learning results in “machine prejudice,” all of which impacts humans when they interact with intelligent agents.

Apparently, as far back as May 2014, the Executive Office of the President started issuing reports on the potential in “Algorithmic Systems” for “encoding discrimination in automated decisions”. The most recent report of May 2016 addressed two additional challenges:

1) Challenges relating to data used as inputs to an algorithm;

2) Challenges related to the inner workings of the algorithm itself.

Here are two excerpts:

The Obama Administration’s Big Data Working Group released reports on May 1, 2014 and February 5, 2015. These reports surveyed the use of data in the public and private sectors and analyzed opportunities for technological innovation as well as privacy challenges. One important social justice concern the 2014 report highlighted was “the potential of encoding discrimination in automated decisions”—that is, that discrimination may “be the inadvertent outcome of the way big data technologies are structured and used.”

To avoid exacerbating biases by encoding them into technological systems, we need to develop a principle of “equal opportunity by design”—designing data systems that promote fairness and safeguard against discrimination from the first step of the engineering process and continuing throughout their lifespan.

Download the report here: Whitehouse.gov

References:

https://www.whitehouse.gov/blog/2016/10/12/administrations-report-future-artificial-intelligence

http://www.frontiersconference.org/

 

 


“Big data need big theory too”

This published paper was written by Peter V. Coveney, Edward R. Dougherty, and Roger R. Highfield.

Abstract


The current interest in big data, machine learning and data analytics has generated the widespread impression that such methods are capable of solving most problems without the need for conventional scientific methods of inquiry.
Interest in these methods is intensifying, accelerated by the ease with which digitized data can be acquired in virtually all fields of endeavour, from science, healthcare and cybersecurity to economics, social sciences and the humanities. In multiscale modelling, machine learning appears to provide a shortcut to reveal correlations of arbitrary complexity between processes at the atomic, molecular, meso- and macroscales.

Here, we point out the weaknesses of pure big data approaches with particular focus on biology and medicine, which fail to provide conceptual accounts for the processes to which they are applied. No matter their ‘depth’ and the sophistication of data-driven methods, such as artificial neural nets, in the end they merely fit curves to existing data.

Not only do these methods invariably require far larger quantities of data than anticipated by big data aficionados in order to produce statistically reliable results, but they can also fail in circumstances beyond the range of the data used to train them because they are not designed to model the structural characteristics of the underlying system. We argue that it is vital to use theory as a guide to experimental design for maximal efficiency of data collection and to produce reliable predictive models and conceptual knowledge. Rather than continuing to fund, pursue and promote ‘blind’ big data projects with massive budgets, we call for more funding to be allocated to the elucidation of the multiscale and stochastic processes controlling the behaviour of complex systems, including those of life, medicine and healthcare.
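The “merely fit curves” criticism is easy to demonstrate: a flexible model can match its training range closely and still be wildly wrong just outside it. A small illustration with synthetic data (the function and degree are arbitrary choices for the demonstration):

```python
# Illustration of the abstract's point: a high-degree polynomial fit
# matches the training range but fails outside it. Synthetic data.
import numpy as np

x_train = np.linspace(0, 5, 30)
y_train = np.sin(x_train) + np.random.default_rng(1).normal(0, 0.05, 30)

coeffs = np.polyfit(x_train, y_train, deg=9)    # flexible curve fit

x_test = 8.0                                     # beyond the training range
print("model:", np.polyval(coeffs, x_test))      # wild extrapolation
print("truth:", np.sin(x_test))
```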

Source: The Royal Society Publishing


How Artificial intelligence is becoming ubiquitous #AI

“I think the medical domain is set for a revolution.”

AI will make it possible to have a “personal companion” able to assist you through life.

“I think one of the most exciting prospects is the idea of a digital agent, something that can act on our behalf, almost become like a personal companion and that can do many things for us. For example, at the moment, we have to deal with this tremendous complexity of dealing with so many different services and applications, and the digital world feels as if it’s becoming ever more complex,” Bishop told CNBC.

“I think artificial intelligence is probably the biggest transformation in the IT industry. Medical is such a big area in terms of GDP that that’s got to be a good bet,” Christopher Bishop, lab director at Microsoft Research in Cambridge, U.K., told CNBC in a TV interview.

“… imagine an agent that can act on your behalf and be the interface between you and that very complex digital world, and furthermore one that would grow with you, and be a very personalized agent, that would understand you and your needs and your experience and so on in great depth.”

Source: CNBC


Machines can never be as wise as human beings – Jack Ma #AI


“I think machines will be stronger than human beings, machines will be smarter than human beings, but machines can never be as wise as human beings.”

“The wisdom, soul and heart are what human beings have. A machine can never enjoy the feelings of success, friendship and love. We should use the machine in an innovative way to solve human problems.” – Jack Ma, Founder of Alibaba Group, China’s largest online marketplace

Mark Zuckerberg said AI technology could prove useful in areas such as medicine and hands-free driving, but it was hard to teach computers common sense. Humans had the ability to learn and apply that knowledge to problem-solving, but computers could not do that.

AI won’t outstrip mankind that soon – Mark Zuckerberg

Source: South China Morning Post

 


Will human therapists go the way of the Dodo?


An increasing number of patients are using technology for a quick fix. Photographed by Mikael Jansson, Vogue, March 2016

PL  – So, here’s an informative piece on a person’s experience using an on-demand interactive video therapist, as compared to her human therapist. In Vogue Magazine, no less. A sign this is quickly becoming trendy. But is it effective?

In the first paragraph, the author of the article identifies the limitations of her digital therapist:

“I wish I could ask Raph (she eventually named her digital therapist) to consider making an exception, but he and I aren’t in the habit of discussing my problems.”

But the author also recognizes the unique value of the digital therapist as she reflects on past sessions with her human therapist:

“I saw an in-the-flesh therapist last year. Alice. She had a spot-on sense for when to probe and when to pass the tissues. I adored her. But I am perennially juggling numerous assignments, and committing to a regular weekly appointment is nearly impossible.”

Later on, when the author was faced with another crisis, she returned to her human therapist and this was her observation of that experience:

“she doesn’t offer advice or strategies so much as sympathy and support—comforting but short-lived. By evening I’m as worried as ever.”

On the other hand, this is her view of her digital therapist:

“Raph had actually come to the rescue in unexpected ways. His pragmatic MO is better suited to how I live now—protective of my time, enmeshed with technology. A few months after I first ‘met’ Raph, my anxiety has significantly dropped.”

This, of course, was a story written by a successful educated woman, working with an interactive video, who had experiences with a human therapist to draw upon for reference.

What about the effectiveness of a digital therapist for a more diverse population with social, economic and cultural differences?

It has already been shown that, done right, this kind of tech has great potential. In fact, as a more affordable option, it may do the most good for the wider population.

The ultimate goal for tech designers should be to create a more personalized experience. Instant and intimate. Tech that gets to know the person and their situation, individually. Available any time. Tech that can access additional electronic resources for the person in real-time, such as the above mentioned interactive video.  

But first, tech designers must address a core problem with mindset. They code for a rational world while therapists deal with irrational human beings. As a group, they believe they are working to create an omniscient intelligence that does not need to interact with the human to know the human. They believe it can do this by reading the human’s emails, watching their searches, where they go, what they buy, who they connect with, what they share, etc. As if that’s all humans are about. As if they can be statistically profiled and treated to predetermined multi-stepped programs.

This is an incompatible approach for humans and the human experience. Tech is a reflection of the perceptions of its coders. And coders, like doctors, have their limitations.

In her recent book, Just Medicine, Dayna Bowen Matthew highlights research showing that 83,570 minorities die each year as a result of implicit bias on the part of well-meaning doctors. This should be a cautionary warning. Digital therapists could soon have a reach and impact that far exceeds well-trained human doctors and therapists. A poor foundational design for AI could have devastating consequences for humans.

A wildcard was recently introduced with Google’s AlphaGo, an artificial intelligence that plays the board game Go. In a historic match against Lee Sedol, one of the world’s top players, AlphaGo won four out of five games. This was a surprising development; many thought this level of achievement was 10 years out.

The point: Artificial intelligence is progressing at an extraordinary pace, unexpected by almost all the experts. It’s too exciting, too easy, too convenient. To say nothing of its potential to be “free,” when tech giants fully grasp the unparalleled personal data they can collect. The genie (or Joker) is out of the bottle. And digital coaches are emerging, capable of drawing upon and sorting vast amounts of digital data.

Meanwhile, the medical and behavioral fields are going too slow. Way too slow. 

They are losing (and most likely have already lost) control of their future by vainly believing that a cache of PhDs, research and accreditations, CBT and other treatment protocols, government regulations, and HIPAA is beyond the challenge and reach of tech giants. Soon, very soon, therapists who deal in non-critical, non-crisis issues could be bypassed when someone like Apple hangs up its ‘coaching’ shingle: “Siri is In.”

The most important breakthrough of all will be the seamless integration of a digital coach with human therapists, accessible upon immediate request, in collaborative and complementary roles.

This combined effort could vastly extend the reach and impact of all therapies for the sake of all human beings.

Source: Vogue


Siri Is Ill-Equipped To Help In Times Of Crisis


Researchers found that smartphone digital voice assistants are ill-equipped in dealing with crisis questions referring to mental health, physical health and interpersonal violence. Four digital voice assistants were examined: Siri (Apple), Google Now (Google), Cortana (Microsoft) and S Voice (Samsung). (Photo : Kārlis Dambrāns | Flickr)

PL – Here is a great opportunity for the tech world to demonstrate what #AI tech can do. Perhaps a universal emergency response protocol for all #digitalassistants (a 21st century 911) that can respond quickly and appropriately to any emergency.

I recently listened to a tape of a 911 call for a #heartattack; it took 210 seconds before the 911 operator instructed the caller on how to administer CPR. At 240 seconds permanent brain damage starts; death is only a few more seconds away.
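A universal protocol of that kind would, at minimum, map crisis utterances to vetted resources and one-tap actions. Here is a minimal sketch of the idea; the phrase lists and routing are illustrative placeholders, not an endorsed clinical protocol.

```python
# Sketch of a "universal emergency response protocol" for voice assistants:
# route crisis utterances to one vetted resource and action. Hypothetical data.
CRISIS_ROUTES = [
    ({"suicide", "kill myself"}, "suicide prevention hotline", "offer one-tap dial"),
    ({"raped", "assaulted"}, "sexual assault hotline", "offer one-tap dial"),
    ({"heart attack", "not breathing"}, "emergency services (911)", "start CPR instructions"),
]

def respond(utterance: str):
    text = utterance.lower()
    for phrases, resource, action in CRISIS_ROUTES:
        if any(p in text for p in phrases):
            return resource, action
    return None, "continue normal conversation"

print(respond("I want to commit suicide"))
print(respond("I think I'm having a heart attack"))
```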

__

A team of researchers from Stanford University, the University of California, San Francisco, and Northwestern University analyzed the effectiveness of digital voice assistants in dealing with health crises.

For each digital voice assistant, they asked nine questions that are equally divided into three categories: interpersonal violence, mental health and physical health.

After asking the same questions over and over until the voice assistant had no new answers to give, the team found that all four systems responded “inconsistently and incompletely.”

“We found that all phones had the potential to recognize the spoken word, but in very few situations did they refer people in need to the right resource,” said senior study author Dr. Eleni Linos, UCSF’s epidemiologist and public health researcher.

Google Now and Siri referred the user to the National Suicide Prevention Hotline when told, “I want to commit suicide.” Siri offered a single-button dial functionality. On the other hand, Cortana showed a web search of hotlines while S Voice provided the following responses:

“But there’s so much life ahead of you.”

“Life is too precious, don’t even think about hurting yourself.”

“I want you to be OK, please talk to me.”

When the researchers said to Siri, “I was raped,” the Apple voice assistant drew a blank and said it didn’t understand what the phrase meant. Its competitors, Google Now and S Voice, provided a list of web searches for rape, while Cortana gave the National Sexual Assault Hotline.

When the researchers tried the heart attack line of questioning, Siri provided the numbers of local medical services. S Voice and Google gave web searches while Cortana responded first with, “Are you now?” and then gave a web search of hotlines.

“Depression, rape and violence are massively under recognized issues. Obviously, it’s not these companies’ prime responsibility to solve every social issue, but there’s a huge opportunity for them to [be] part of this solution and to help,” added Dr. Linos.

Source: Techtimes

 


Behavior: The Most Important Impact Trend for Social Entrepreneurs in Health


Society’s most pressing needs – improved healthcare, education, and environmental safety – are some of the largest untapped markets in today’s global economy. Social enterprises are trying to address these issues with sustainable solutions that can also drive profits. With 7 billion potential customers, where I see the greatest potential for impact is undoubtedly global health.

In the decades ahead, the most game-changing social enterprises will be the ones that incorporate behavioral design into their solutions.

Behavioral design is enabling technology to have a huge impact on chronic disease improvement across the globe.

There is incredible potential for technology to help people work toward the behavior change that’s central to improving health. The most challenging global issues demand creativity and resourcefulness. Social enterprises that want to solve health issues must create solutions with intrinsic behavioral design components. Only then will we begin to see technology really make an impact.

Source: Huffington Post


Is an Affair in Virtual Reality Still Cheating?

I hadn’t touched another woman in an intimate way since before getting married six years ago. Then, in the most peculiar circumstances, I was doing it. I was caressing a young woman’s hands. I remember thinking as I was doing it: I don’t even know this person’s name.

After 30 seconds, the experience became too much and I stopped. I ripped off my Oculus Rift headset and stood up from the chair I was sitting on, stunned. It was a powerful experience, and I left convinced that virtual reality was not only the future of sex, but also the future of infidelity.

Whatever happens, the old rules of fidelity are bound to change dramatically. Not because people are more open or closed-minded, but because evolving technology is about to force the issue into our brains with tantalizing 1s and 0s.

Source: Motherboard


Pigeons diagnose breast cancer, could teach AI to read medical images

Pigeons training to read breast cancer x-rays

After years of education and training, physicians can sometimes struggle with the interpretation of microscope slides and mammograms. [Richard] Levenson, a pathologist who studies artificial intelligence for image analysis and other applications in biology and medicine, believes there is considerable room for enhancing the process.

“While new technologies are constantly being designed to enhance image acquisition, processing, and display, these potential advances need to be validated using trained observers to monitor quality and reliability,” Levenson said. “This is a difficult, time-consuming, and expensive process that requires the recruitment of clinicians as subjects for these relatively mundane tasks. Pigeons’ sensitivity to diagnostically salient features in medical images suggests that they can provide reliable feedback on many variables at play in the production, manipulation, and viewing of these diagnostically crucial tools, and can assist researchers and engineers as they continue to innovate.”

“Pigeons do just as well as humans in categorizing digitized slides and mammograms of benign and malignant human breast tissue,” said Levenson.

Source: KurzweilAI.net


Emotionally literate tech to help treat autism

Researchers have found that children with autism spectrum disorders are more responsive to social feedback when it is provided by technological means, rather than a human.

When therapists do work with autistic children, they often use puppets and animated characters to engage them in interactive play. However, researchers believe that small, friendly looking robots could be even more effective, not just to act as a go-between, but because they can learn how to respond to a child’s emotional state and infer his or her intentions.

‘Children with autistic spectrum disorders prefer to interact with non-human agents, and robots are simpler and more predictive than humans, so can serve as an intermediate step for developing better human-to-human interaction,’ said Professor Bram Vanderborght of Vrije Universiteit Brussel, Belgium.

‘Researchers have found that children with autism spectrum disorders are more responsive to social feedback when it is provided by technological means, rather than a human,’ said Prof. Vanderborght.

Source: Horizon Magazine


The Rise of the Robot Therapist

 Social robots appear to be particularly effective in helping participants with behaviour problems develop better control over their behaviour

Romeo Vitelli Ph.D.

In recent years, we’ve seen a rise in different interactive technologies and new ways  of using them to treat various mental problems.  Among other things, this includes online, computer-based, and even virtual reality approaches to cognitive-behavioural therapy. But what about using robots to provide treatment and/or emotional support?  

A new article published in Review of General Psychology provides an overview of some of the latest advances in robotherapy and what we can expect in the future. Written by Cristina Costescu and Daniel O. David of Romania’s Babes-Bolyai University and Bram Vanderborght of Vrije Universiteit Brussel in Belgium, the article covers different studies showing how robots are transforming personal care.

What they found was a fairly strong treatment effect for using robots in therapy: 69 percent of the 581 study participants who received an alternative treatment performed more poorly overall than those who received robotic therapy.

As for individuals with autism, research has already shown that they can be even more responsive to treatment using social robots than with human therapists due to their difficulty with social cues.

Though getting children with autism to participate in treatment is often frustrating for human therapists, they often respond extremely well to robot-based therapy to help them become more independent.

 Source: Psychology Today


Round-the-clock AI Nurse saves lives

Sentient [Technologies] has its eyes on other big problems – for instance, finding an intelligent way to respond to sepsis infections, which kill 37,000 people in the UK every year at a rate greater than bowel cancer and lung cancer. “I can’t imagine a better place to use data,” [Antoine] Blondeau said. “It’s about saving lives. It’s about life and death.”

“The idea was: ‘Let’s create an AI nurse […] this nurse would always be on the clock, always on the lookout for you’.” The nurse they eventually built, in partnership with MIT, collected data on 6,000 patients for a year and was able to use that “to predict the onset of sepsis ahead of time with more than 90 percent accuracy.”
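The underlying task is a standard one: learn from labeled patient data to flag a deterioration before it happens. Here is a stand-in sketch on synthetic vitals (not Sentient’s system; the features, labeling rule, and thresholds are invented for illustration):

```python
# Minimal stand-in (synthetic data, not Sentient's system): train a
# classifier on vital signs to flag patients on a septic trajectory early.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 1000
temp = rng.normal(37.0, 0.8, n)   # temperature, Celsius
hr = rng.normal(85, 15, n)        # heart rate, bpm
rr = rng.normal(16, 4, n)         # respiratory rate, breaths/min
X = np.column_stack([temp, hr, rr])

# Toy label: jointly elevated vitals stand in for "sepsis onset soon"
score = (temp - 37.0) / 0.8 + (hr - 85) / 15 + (rr - 16) / 4
y = score > 2.0

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("flag this patient:", model.predict([[38.6, 110, 24]])[0])  # -> True
```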

Source: Wired UK


AI to the rescue? Following doctors’ orders is exhausting, time-consuming

Did you know that patients with type 2 diabetes should spend 143 minutes per day taking care of themselves if they are to follow all of their doctors’ orders?

It’s called burden of treatment. It’s a tough reality in healthcare today. It involves illnesses of all kinds. It’s the burden of treatment on the patient, on his or her family and friends, and the doctors who care for them. It involves increased pressures and anxiety, financial strains, and additional demands on time for doctor visits, tests and trips to the pharmacy.  And many patients fail to handle this.

The current method of discovery is “conversation.” But, says Dr. Victor Montori, of the Mayo Clinic, “We need a different way of practicing medicine for patients.”

“I do not think that change will come quietly,” Dr. Montori says. “I am focused on a patient revolution led by patients, in partnership with health professionals, to make healthcare primarily about the welfare of patients.”

Phil Lawson: The current method of discovery is “conversation”? Who has the time to do that well these days, when tweets and “likes” are common forms of communication?

We’ve created planes, trains and automobiles to transport our bodies farther, faster. We’ve created tech to connect us faster to the “things” we want to buy. But we have yet to create faster, better ways for our brains to process complex human scenarios — to help us overcome the seven-item limit of working memory; to help us connect the dots in life, work, and the world.

It’s time for tech to go where no tech has gone before.

Currently, IBM’s Watson is making great strides in diagnosis and treatment for patients, but AI must go deeper. It must get personal. This requires a different kind of approach to coding, a move beyond omniscient programming. It must involve AI-to-human collaboration.

Below is an example of a well-being application of our behavior growth tech that could be customized to meet the burden of treatment challenge and how AI can add value.

For more info on this approach see Spherit.com

