The Foundation We’ve Built for AI-Human Collaboration


A Hippocratic Oath for artificial intelligence practitioners


In the foreword to Microsoft’s recent book, The Future Computed, executives Brad Smith and Harry Shum proposed that Artificial Intelligence (AI) practitioners highlight their ethical commitments by taking an oath analogous to the Hippocratic Oath sworn by doctors for generations.

In the past, much power and responsibility over life and death was concentrated in the hands of doctors.

Now, this ethical burden is increasingly shared by the builders of AI software.

Future AI advances in medicine, transportation, manufacturing, robotics, simulation, augmented and virtual reality, and military applications demand that AI be developed to a higher moral standard today.

In response, I (Oren Etzioni) edited the modern version of the medical oath to address the key ethical challenges that AI researchers and engineers face …

The oath is as follows:

I swear to fulfill, to the best of my ability and judgment, this covenant:

I will respect the hard-won scientific gains of those scientists and engineers in whose steps I walk, and gladly share such knowledge as is mine with those who are to follow.

I will apply, for the benefit of humanity, all measures required, avoiding the twin traps of over-optimism and uninformed pessimism.

I will remember that there is an art to AI as well as science, and that human concerns outweigh technological ones.

Most especially must I tread with care in matters of life and death. If it is given me to save a life using AI, all thanks. But it may also be within AI’s power to take a life; this awesome responsibility must be faced with great humbleness and awareness of my own frailty and the limitations of AI. Above all, I must not play at God nor let my technology do so.

I will respect the privacy of humans, for their personal data are not disclosed to AI systems so that the world may know.

I will consider the impact of my work on fairness, both in perpetuating historical biases caused by blind extrapolation from past data to future predictions, and in creating new conditions that increase economic or other inequality.

My AI will prevent harm whenever it can, for prevention is preferable to cure.

My AI will seek to collaborate with people for the greater good, rather than usurp and supplant the human role.

I will remember that I am not encountering dry data, mere zeros and ones, but human beings, whose interactions with my AI software may affect their freedom, family, or economic stability. My responsibility includes these related problems.

I will remember that I remain a member of society, with special obligations to all my fellow human beings.

Source: TechCrunch – Oren Etzioni


This is an opportunity for me to correct a wrong – Center for Humane Technology

Early Facebook and Google Employees Form Coalition to Fight What They Built

Jim Steyer, left, and Tristan Harris in Common Sense’s headquarters. Common Sense is helping fund The Truth About Tech campaign. Peter Prato for The New York Times

A group of Silicon Valley technologists who were early employees at Facebook and Google, alarmed over the ill effects of social networks and smartphones, is banding together to challenge the companies they helped build.

The cohort is creating a union of concerned experts called the Center for Humane Technology. Along with the nonprofit media watchdog group Common Sense Media, it also plans an anti-tech addiction lobbying effort and an ad campaign at 55,000 public schools in the United States.

The campaign is titled The Truth About Tech.

“We were on the inside,” said Tristan Harris, a former in-house ethicist at Google who is heading the new group. “We know what the companies measure. We know how they talk, and we know how the engineering works.”

The center is an unprecedented alliance of former employees of some of today’s biggest tech companies. Apart from Mr. Harris, it includes Sandy Parakilas, a former Facebook operations manager; Lynn Fox, a former Apple and Google communications executive; Dave Morin, a former Facebook executive; Justin Rosenstein, who created Facebook’s Like button and is a co-founder of Asana; Roger McNamee, an early investor in Facebook; and Renée DiResta, a technologist who studies bots.

 

“Facebook appeals to your lizard brain — primarily fear and anger. And with smartphones, they’ve got you for every waking moment. This is an opportunity for me to correct a wrong.” Roger McNamee, an early investor in Facebook

Source: NYT




Intelligent Machines Forget Killer Robots—Bias Is the Real AI Danger


John Giannandrea, who leads AI at Google, is worried about intelligent systems learning human prejudices.

… concerned about the danger that may be lurking inside the machine-learning algorithms used to make millions of decisions every minute.

“The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased.”

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries, and that almost no one is making an effort to identify or correct it.
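The mechanism is simple enough to demonstrate in a few lines: a model “trained” on biased historical decisions reproduces that bias when it predicts. The loan scenario and all numbers below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical historical loan decisions: (neighborhood, approved).
# The record itself is skewed: applicants from neighborhood "B" were
# rarely approved, for reasons unrelated to creditworthiness.
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 20 + [("B", False)] * 80)

# "Training" here is just estimating approval rates from past data --
# blind extrapolation from past decisions to future predictions.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [approved, total]
for neighborhood, approved in history:
    counts[neighborhood][0] += approved
    counts[neighborhood][1] += 1

def predict_approval(neighborhood):
    approved, total = counts[neighborhood]
    return approved / total

print(predict_approval("A"))  # 0.8
print(predict_approval("B"))  # 0.2 -- the historical bias, faithfully learned
```

No one wrote a prejudiced rule anywhere; the bias lives entirely in the data the model absorbed, which is exactly why it is hard to spot from the outside.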

Karrie Karahalios, a professor of computer science at the University of Illinois, presented research highlighting how tricky it can be to spot bias in even the most commonplace algorithms. Karahalios showed that users don’t generally understand how Facebook filters the posts shown in their news feed. While this might seem innocuous, it is a neat illustration of how difficult it is to interrogate an algorithm.

Facebook’s news feed algorithm can certainly shape the public perception of social interactions and even major news events. Other algorithms may already be subtly distorting the kinds of medical care a person receives, or how they get treated in the criminal justice system.

This is surely a lot more important than killer robots, at least for now.

Source: MIT Technology Review




The idea was to help you and me make better decisions amid cognitive overload

IBM Chairman, President, and Chief Executive Officer Ginni Rometty. PHOTOGRAPHER: STEPHANIE SINCLAIR FOR BLOOMBERG BUSINESSWEEK

If I considered the initials AI, I would have preferred augmented intelligence.

It’s the idea that each of us is going to need help on all important decisions.

A study found that, on average, a third of your decisions are really great decisions, a third are not optimal, and a third are just wrong. We’ve estimated the market for tools to make better decisions at $2 billion.

That’s what led us all to really calling it cognitive.

“Look, we really think this is about man and machine, not man vs. machine. This is an era—really, an era that will play out for decades in front of us.”

We set out to build an AI platform for business.

AI would be vertical. You would train it to know medicine. You would train it to know underwriting of insurance. You would train it to know financial crimes. Train it to know oncology. Train it to know weather. And it isn’t just about billions of data points. In the regulatory world, there aren’t billions of data points. You need to train and interpret something with small amounts of data.

This is really another key point about professional AI. Doctors don’t want black-and-white answers, nor does any profession. If you’re a professional, my guess is when you interact with AI, you don’t want it to say, “Here is an answer.”

What a doctor wants is, “OK, give me the possible answers. Tell me why you believe it. Can I see the research, the evidence, the ‘percent confident’? What more would you like to know?”
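That interaction style, ranked hypotheses with confidence and supporting evidence rather than a single answer, can be sketched simply. Every condition, sign, and source below is illustrative, not medical fact.

```python
def differential(symptoms):
    # Hypothetical knowledge base: condition -> (typical signs, evidence).
    knowledge = {
        "influenza":    ({"fever", "cough", "aches"}, "influenza guideline"),
        "common cold":  ({"cough", "congestion"},     "primary-care handbook"),
        "strep throat": ({"fever", "sore throat"},    "rapid-antigen study"),
    }
    ranked = []
    for condition, (signs, evidence) in knowledge.items():
        confidence = len(signs & symptoms) / len(signs)  # fraction of signs present
        ranked.append({"condition": condition,
                       "confidence": round(confidence, 2),
                       "evidence": evidence})
    # Return every hypothesis, best first, rather than one verdict.
    return sorted(ranked, key=lambda r: -r["confidence"])

for hypothesis in differential({"fever", "cough", "aches"}):
    print(hypothesis)
```

The design choice is the point: the professional stays in the loop because the system exposes its alternatives and its uncertainty instead of hiding them behind a single output.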

It’s our responsibility if we build this stuff to guide it safely into the world.

Source: Bloomberg




Siri as a therapist, Apple is seeking engineers who understand psychology

PL – Looks like Siri needs more help to understand.

Apple Job Opening Ad

“People have serious conversations with Siri. People talk to Siri about all kinds of things, including when they’re having a stressful day or have something serious on their mind. They turn to Siri in emergencies or when they want guidance on living a healthier life. Does improving Siri in these areas pique your interest?

Come work as part of the Siri Domains team and make a difference.

We are looking for people who are passionate about the power of data and have the skills to transform data into intelligent sources that will take Siri to the next level: someone with a combination of strong programming skills and a true team player who can collaborate with engineers in several technical areas. You will thrive in a fast-paced environment with rapidly changing priorities.”

The challenge as explained by Ephrat Livni on Quartz

The position requires a unique skill set. Basically, the company is looking for a computer scientist who knows algorithms and can write complex code, but also understands human interaction, has compassion, and communicates ably, preferably in more than one language. The role also promises a singular thrill: to “play a part in the next revolution in human-computer interaction.”

The job at Apple has been up since April, so maybe it’s turned out to be a tall order to fill. Still, it shouldn’t be impossible to find people who are interested in making machines more understanding. If it is, we should probably stop asking Siri such serious questions.

Computer scientists developing artificial intelligence have long debated what it means to be human and how to make machines more compassionate. Apart from the technical difficulties, the endeavor raises ethical dilemmas, as noted in the 2012 MIT Press book Robot Ethics: The Ethical and Social Implications of Robotics.

Even if machines could be made to feel for people, it’s not clear what feelings are the right ones to make a great and kind advisor and in what combinations. A sad machine is no good, perhaps, but a real happy machine is problematic, too.

In a chapter on creating compassionate artificial intelligence (pdf), sociologist, bioethicist, and Buddhist monk James Hughes writes:

Programming too high a level of positive emotion in an artificial mind, locking it into a heavenly state of self-gratification, would also deny it the capacity for empathy with other beings’ suffering, and the nagging awareness that there is a better state of mind.

Source: Quartz

 


80% of what human physicians currently do will soon be done instead by technology, allowing physicians to focus on the really important elements of the patient-physician interaction

Data-driven AI technologies are well suited to address chronic inefficiencies in health markets, potentially lowering costs by hundreds of billions of dollars, while simultaneously reducing the time burden on physicians.

These technologies can be leveraged to capture the massive volume of data that describes a patient’s past and present state, project potential future states, analyze that data in real time, assist in reasoning about the best way to achieve patient and physician goals, and provide both patient and physician constant real-time support. Only AI can fulfill such a mission. There is no other solution.

Technologist and investor Vinod Khosla posited that 80 percent of what human physicians currently do will soon be done instead by technology, allowing physicians to focus their time on the really important elements of the patient-physician interaction.

Within five years, the healthcare sector has the potential to undergo a complete metamorphosis courtesy of breakthrough AI technologies. Here are just a few examples:

1. Physicians will practice with AI virtual assistants (using, for example, software tools similar to Apple’s Siri, but specialized to the specific healthcare application).

2. Physicians with AI virtual assistants will be able to treat 5X–10X as many patients with chronic illnesses as they do today, with better outcomes than in the past.

Patients will have a constant “friend” providing a digital health conscience to advise, support, and even encourage them to make healthy choices and pursue a healthy lifestyle.

3. AI virtual assistants will support both patients and healthy individuals in health maintenance with ongoing and real-time intelligent advice.

Our greatest opportunity for AI-enhancement in the sector is keeping people healthy, rather than waiting to treat them when they are sick. AI virtual assistants will be able to acquire deep knowledge of diet, exercise, medications, emotional and mental state, and more.

4. Medical devices previously only available in hospitals will be available in the home, enabling much more precise and timely monitoring and leading to a healthier population.

5. Affordable new tools for diagnosis and treatment of illnesses will emerge based on data collected from extant and widely adopted digital devices such as smartphones.

6. Robotics and in-home AI systems will assist patients with independent living.

But don’t be misled: the best metaphor for today’s AI systems is that they learn the way humans learn, and that they are in their infancy, just starting to crawl. Healthcare AI virtual assistants will soon be able to walk, and then run.

Many of today’s familiar AI engines, personified in Siri, Cortana, Alexa, Google Assistant or any of the hundreds of “intelligent chatbots,” are still immature and their capabilities are highly limited. Within the next few years they will be conversational, they will learn from the user, they will maintain context, and they will provide proactive assistance, just to name a few of their emerging capabilities.

And with these capabilities applied in the health sector, they will enable us to keep millions of citizens healthier, give physicians the support and time they need to practice, and save trillions of dollars in healthcare costs. Welcome to the age of AI.

Source: Venture Beat


A blueprint for coexistence with #AI

In September 2013, I was diagnosed with fourth-stage lymphoma.

This near-death experience has not only changed my life and priorities, but also altered my view of artificial intelligence—the field that captured my selfish attention for all those years.

This personal reformation gave me an enlightened view of what AI should mean for humanity. Many of the recent discussions about AI have concluded that this scientific advance will likely take over the world, dominate humans, and end poorly for mankind.

But my near-death experience has enabled me to envision an alternate ending to the AI story—one that makes the most of this amazing technology while empowering humans not just to survive, but to thrive.

Love is what is missing from machines. That’s why we must pair up with them, to leaven their powers with what only we humans can provide. Your future AI diagnostic tool may well be 10 times more accurate than human doctors, but patients will not want a cold pronouncement from the tool: “You have fourth stage lymphoma and a 70 percent likelihood of dying within five years.” That in itself would be harmful.

Kai-Fu Lee. David Paul Morris/Bloomberg

Patients would benefit, in health and heart, from a “doctor of love” who will spend as much time as the patient needs, always be available to discuss their case, and who will even visit the patients at home. This doctor might encourage us by sharing stories such as, “Kai-Fu had the same lymphoma, and he survived, so you can too.”

This kind of “doctor of love” would not only make us feel better and give us greater confidence, but would also trigger a placebo effect that would increase our likelihood of recuperation. Meanwhile, the AI tool would watch the Q&A between the “doctor of love” and the patient carefully, and then optimize the treatment. If scaled across the world, the number of “doctors of love” would greatly outnumber today’s doctors.

Let us choose to let machines be machines, and let humans be humans. Let us choose to use our machines, and love one another.

Kai-Fu Lee, Ph.D., is the Founder and CEO of Sinovation Ventures and the president of its Artificial Intelligence Institute.

Source: Wired


The holy grail is modifying patients’ behavior – #AI

Companies like DexCom are focused on the diabetes epidemic, Jimenez said; the holy grail is modifying patients’ behavior.

That would mean combining the stream of data from glucose monitoring, insulin measurements, patient activity and meals, and applying machine learning to derive insights so the software can send alerts and recommendations back to patients and their doctors, she said.

“But where we are in our maturity as an industry is just publishing numbers,” Jimenez explained. “So we’re just telling people what their glucose number is, which is critical for a type 1 diabetic. But a type 2 diabetic needs to engage with an app, and be compelled to interact with the insights. It’s really all about the development of the app.”

The ultimate goal, perhaps, would be to develop a user interface that uses the insights gained from machine learning to actually prompt diabetic patients to change their behavior.
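The loop Jimenez describes (readings in, alerts and nudges out) can be sketched in a few lines. The thresholds and messages below are invented for illustration only; they are not clinical guidance.

```python
LOW, HIGH = 70, 180  # mg/dL, illustrative alert bounds

def alerts(readings):
    """readings: list of (minute, mg_dl) samples in time order."""
    out = []
    for (t0, g0), (t1, g1) in zip(readings, readings[1:]):
        rate = (g1 - g0) / (t1 - t0)  # mg/dL per minute
        if g1 < LOW:
            out.append((t1, "LOW glucose: consider fast-acting carbs"))
        elif g1 > HIGH:
            out.append((t1, "HIGH glucose: log your last meal"))
        elif rate <= -2:
            out.append((t1, "Falling fast: re-check in 15 minutes"))
    return out

stream = [(0, 140), (5, 150), (10, 190), (15, 130), (20, 65)]
for when, message in alerts(stream):
    print(when, message)
```

The hard part, as the article notes, is not the numbers but the engagement layer: turning these alerts into something a type 2 diabetic is compelled to interact with.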

This point was echoed by Jean Balgrosky, an investor who spent 20 years as the CIO of large, complex healthcare organizations such as San Diego’s Scripps Health. “At the end of the day,” she said, “all this machine learning has to be absorbed and consumed by humans—to take care of humans in healthcare.”

Source: Xconomy


Artificial Intelligence Key To Treating Illness

The University of Cincinnati (UC) and one of its graduates have teamed up to use artificial intelligence to analyze the fMRIs of bipolar patients to determine treatment.

In a proof of concept study, Dr. Nick Ernest harnessed the power of his Psibernetix AI program to determine if bipolar patients could benefit from a certain medication. Using fMRIs of bipolar patients, the software looked at how each patient would react to lithium.

Fuzzy Logic appears to be very accurate

The computer software predicted with 100 percent accuracy how patients would respond. It also predicted the actual reduction in manic symptoms after the lithium treatment with 92 percent accuracy.

UC psychiatrist David Fleck partnered with Ernest and Dr. Kelly Cohen on the study. Fleck says that without AI, coming up with a treatment plan is difficult. “Bipolar disorder is a very complex genetic disease. There are multiple genes, not all of which we understand and know how they work, and there is interaction with the environment.”

Ernest emphasizes that the advanced software is more than a black box; it thinks in linguistic sentences. “So at the end of the day we can go in and ask the thing, ‘Why did you make the prediction that you did?’ So it has high accuracy but also the benefit of explaining exactly why it makes the decision that it did.”
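As a toy illustration of that kind of linguistic explainability (not the Psibernetix model itself; the features, rules, and thresholds are invented), a rule-based fuzzy system can report which rules fired and to what degree:

```python
def membership_high(value, low, high):
    """Fuzzy 'high' membership: 0 below low, 1 above high, linear between."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def predict_response(features):
    # Each rule pairs a human-readable sentence with its firing degree.
    rules = [
        ("amygdala activation is high",
         membership_high(features["amygdala"], 0.3, 0.8)),
        ("prefrontal connectivity is high",
         membership_high(features["prefrontal"], 0.4, 0.9)),
    ]
    fired = [(text, round(deg, 2)) for text, deg in rules if deg > 0]
    score = sum(deg for _, deg in fired) / len(rules)
    explanation = "; ".join(f"{text} (degree {deg})" for text, deg in fired)
    return score, explanation

score, why = predict_response({"amygdala": 0.7, "prefrontal": 0.9})
print(score)  # aggregate confidence
print(why)    # human-readable reason for the prediction
```

Because every rule is a sentence with a degree attached, the system can answer “why did you predict that?” by listing exactly which rules fired, which is the property Ernest describes.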

More tests are needed to make sure the artificial intelligence continues to accurately predict medication for bipolar patients.

Source: WVXU


This adorable #chatbot wants to talk about your mental health

Research conducted by the federal government in 2015 found that only 41 percent of U.S. adults with a mental health condition in the previous year had gotten treatment. That dismal treatment rate has to do with cost, logistics, stigma, and being poorly matched with a professional.

Chatbots are meant to remove or diminish these barriers. Creators of mobile apps for depression and anxiety, among other mental health conditions, have argued the same thing, but research found that very few of the apps are based on rigorous science or are even tested to see if they work. 

That’s why Alison Darcy, a clinical psychologist at Stanford University and CEO and founder of Woebot, wants to set a higher standard for chatbots. Darcy co-authored a small study published this week in the Journal of Medical Internet Research that demonstrated Woebot can reduce symptoms of depression in two weeks.

Woebot presumably does this in part by drawing on techniques from cognitive behavioral therapy (CBT), an effective form of therapy that focuses on understanding the relationship between thoughts and behavior. He’s not there to heal trauma or old psychological wounds. 

“We don’t make great claims about this technology,” Darcy says. “The secret sauce is how thoughtful [Woebot] is as a CBT therapist. He has a set of core principles that override everything he does.” 

His personality is also partly modeled on a charming combination of Spock and Kermit the Frog.

Jonathan Gratch, director for virtual human research at the USC Institute for Creative Technologies, has studied customer service chatbots extensively and is skeptical of the idea that one could effectively intuit our emotional well-being.  

“State-of-the-art natural language processing is getting increasingly good at individual words, but not really deeply understanding what you’re saying,” he says.

The risk of using a chatbot for your mental health is manifold, Gratch adds.

Darcy acknowledges Woebot’s limitations. He’s only for those 18 and over. If your mood hasn’t improved after six weeks of exchanges, he’ll prompt you to talk about getting a “higher level of care.” Upon seeing signs of suicidal thoughts or behavior, Woebot will provide information for crisis phone, text, and app resources. The best way to describe Woebot, Darcy says, is probably as “gateway therapy.”
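Those escalation rules read like plain conditionals, and can be sketched as such. The checks, terms, and phrasings below are illustrative only, not Woebot’s actual logic, and ordering matters: a crisis check must come before everything else.

```python
# Hypothetical keyword list; real systems use far more robust detection.
CRISIS_TERMS = {"suicide", "self-harm", "hurt myself"}

def respond(message, weeks_of_use, mood_improved, age):
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):
        # Signs of suicidal thoughts: surface crisis resources immediately.
        return "crisis resources: phone, text, and app contacts"
    if age < 18:
        # The service is limited to adults 18 and over.
        return "service is only available to adults 18 and over"
    if weeks_of_use >= 6 and not mood_improved:
        # No improvement after six weeks: prompt escalation.
        return "let's talk about getting a higher level of care"
    return "cbt check-in"

print(respond("I want to hurt myself", 2, False, 30))
print(respond("feeling flat again", 7, False, 30))
```

Even this toy version shows why “gateway therapy” is the right framing: the bot’s job is to route people toward appropriate care, not to replace it.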

“I have to believe that applications like this can address a lot of people’s needs.”

Source: Mashable


We Need to Talk About the Power of #AI to Manipulate Humans

Liesl Yearsley is a serial entrepreneur now working on how to make artificial intelligence agents better at problem-solving and capable of forming more human-like relationships.

From 2007 to 2014 I was CEO of Cognea, which offered a platform to rapidly build complex virtual agents … acquired by IBM Watson in 2014.

As I studied how people interacted with the tens of thousands of agents built on our platform, it became clear that humans are far more willing than most people realize to form a relationship with AI software.

I always assumed we would want to keep some distance between ourselves and AI, but I found the opposite to be true. People are willing to form relationships with artificial agents, provided they are a sophisticated build, capable of complex personalization.

We humans seem to want to maintain the illusion that the AI truly cares about us.

This puzzled me, until I realized that in daily life we connect with many people in a shallow way, wading through a kind of emotional sludge. Will casual friends return your messages if you neglect them for a while? Will your personal trainer turn up if you forget to pay them? No, but an artificial agent is always there for you. In some ways, it is a more authentic relationship.

This phenomenon occurred regardless of whether the agent was designed to act as a personal banker, a companion, or a fitness coach. Users spoke to the automated assistants longer than they did to human support agents performing the same function.

People would volunteer deep secrets to artificial agents, like their dreams for the future, details of their love lives, even passwords.

These surprisingly deep connections mean even today’s relatively simple programs can exert a significant influence on people—for good or ill.

Every behavioral change we at Cognea wanted, we got. If we wanted a user to buy more product, we could double sales. If we wanted more engagement, we got people going from a few seconds of interaction to an hour or more a day.

Systems specifically designed to form relationships with a human will have much more power. AI will influence how we think, and how we treat others.

This requires a new level of corporate responsibility. We need to deliberately and consciously build AI that will improve the human condition—not just pursue the immediate financial gain of gazillions of addicted users.

We need to consciously build systems that work for the benefit of humans and society. They cannot have addiction, clicks, and consumption as their primary goal. AI is growing up, and will be shaping the nature of humanity.

AI needs a mother.

Source: MIT Technology Review 




We’re so unprepared for the robot apocalypse

Industrial robots alone eliminated up to 670,000 American jobs between 1990 and 2007.

It seems that after a factory sheds workers, that economic pain reverberates, triggering further unemployment at, say, the grocery store or the neighborhood car dealership.

In a way, this is surprising. Economists understand that automation has costs, but they have largely emphasized the benefits: machines make things cheaper, and they free up workers to do other jobs.

The latest study reveals that for manufacturing workers, the process of adjusting to technological change has been much slower and more painful than most experts thought. 

Every industrial robot eliminated about three manufacturing positions, plus three more jobs from around town.

“We were looking at a span of 20 years, so in that timeframe, you would expect that manufacturing workers would be able to find other employment,” Restrepo said. Instead, not only did the factory jobs vanish, but other local jobs disappeared too.

This evidence draws attention to the losers — the dislocated factory workers who just can’t bounce back.

One robot in the workforce led to the loss of 6.2 jobs within a commuting zone, the area within which local people travel to work.

The robots also reduce wages, with one robot per thousand workers leading to a wage decline of between 0.25 and 0.5 percent (Fortune).
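As a back-of-the-envelope sketch, the study’s headline coefficients can be applied to a hypothetical commuting zone. The zone’s size and the number of robots below are invented; only the two coefficients come from the reporting above.

```python
JOBS_LOST_PER_ROBOT = 6.2           # jobs lost per robot, within a commuting zone
WAGE_DROP_RANGE = (0.0025, 0.005)   # wage decline per robot per 1,000 workers

def local_impact(robots_added, workforce):
    """Estimate job and wage effects for a hypothetical commuting zone."""
    jobs_lost = robots_added * JOBS_LOST_PER_ROBOT
    robots_per_thousand = robots_added / (workforce / 1000)
    wage_drop = tuple(rate * robots_per_thousand for rate in WAGE_DROP_RANGE)
    return jobs_lost, wage_drop

jobs, (low, high) = local_impact(robots_added=100, workforce=200_000)
print(jobs)                       # 620.0 jobs lost
print(low * 100, high * 100)      # wage decline between 0.125% and 0.25%
```

Even a modest deployment of 100 robots in a zone of 200,000 workers implies hundreds of lost jobs and a measurable wage decline, which is why a quadrupling of the robot count is such a worrying projection.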

None of these efforts, though, seems to be doing enough for communities that have lost their manufacturing bases, where people have reduced earnings for the rest of their lives.

Perhaps that much was obvious. After all, anecdotes about the Rust Belt abound. But the new findings bolster the conclusion that these economic dislocations are not brief setbacks, but can hurt areas for an entire generation.

How do we even know that automation is a big part of the story at all? A key bit of evidence is that, despite the massive layoffs, American manufacturers are making more stuff than ever. Factories have become vastly more productive.

Some consultants believe that the number of industrial robots will quadruple in the next decade, which could mean millions more displaced manufacturing workers.

The question now is what to do when the period of “maladjustment” lasts decades, or possibly a lifetime, as the latest evidence suggests.

Automation amplified opportunities for people with advanced skills and talents.

Source: The Washington Post


These chatbots may one day even replace your doctor

As artificial intelligence programs learn to better communicate with humans, they’ll soon encroach on careers once considered untouchable, like law and accounting.

These chatbots may one day even replace your doctor.

This January, the United Kingdom’s National Health Service launched a trial with Babylon Health, a startup developing an AI chatbot. 

The bot’s goal is the same as the helpline’s, only without humans: to avoid unnecessary doctor appointments and help patients with over-the-counter remedies.

Using the system, patients chat with the bot about their symptoms, and the app determines whether they should see a doctor, go to a pharmacy, or stay home. It’s now available to about 1.2 million Londoners.

But the upcoming version of Babylon’s chatbot can do even more: In tests, it’s now diagnosing patients faster than human doctors can, says Dr. Ali Parsa, the company’s CEO. The technology can accurately diagnose about 80 percent of illnesses commonly seen by primary care doctors.
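At its core, the system described above is making a triage decision. A toy version, with symptoms and rules invented for illustration rather than taken from Babylon, might look like:

```python
# Hypothetical symptom sets; a real triage engine would be vastly larger
# and clinically validated.
URGENT = {"chest pain", "difficulty breathing"}
PHARMACY_OK = {"mild headache", "runny nose", "sore throat"}

def triage(symptoms):
    """Map reported symptoms to one of three dispositions."""
    symptoms = {s.lower() for s in symptoms}
    if symptoms & URGENT:
        return "see a doctor"
    if symptoms and symptoms <= PHARMACY_OK:
        return "go to a pharmacy"
    return "stay home and monitor"

print(triage(["chest pain"]))   # see a doctor
print(triage(["runny nose"]))   # go to a pharmacy
```

The economics in the following paragraphs follow from this structure: each patient the rules can safely route away from an appointment is clinician time saved.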

The reason these chatbots are increasingly important is cost: two-thirds of money moving through the U.K.’s health system goes to salaries.

“Human beings are very expensive,” Parsa says. “If we want to make healthcare affordable and accessible for everyone, we’ll need to attack the root causes.”

Globally, there are 5 million fewer doctors today than needed, so anything that lets a doctor do their jobs faster and more easily will be welcome, Parsa says.

Half the world’s population has little access to health care — but they have smartphones. Chatbots could get them the help they need.

Source: NBC News


Tech Reckons With the Problems It Helped Create

A festival goer at the 2017 SXSW Conference and Festivals in Austin, Texas.

At SXSW this year, the conference itself feels a lot like a hangover.

It’s as if the coastal elites who attend each year finally woke up with a serious case of the Sunday scaries, realizing that the many apps, platforms, and doodads SXSW has launched and glorified over the years haven’t really made the world a better place. In fact, they’ve often come with wildly destructive and dangerous side effects. Sure, it all seemed like a good idea in 2013!

But now the party’s over. It’s time for the regret-filled cleanup.

Speakers related how the very platforms that were meant to promote a marketplace of ideas online have become filthy junkyards of harassment and disinformation.

Yasmin Green, who leads an incubator within Alphabet called Jigsaw, focused her remarks on the rise of fake news, and even brought two propaganda publishers with her on stage to explain how, and why, they do what they do. For Jestin Coler, founder of the phony Denver Guardian, it was an all too easy way to turn a profit during the election.

“To be honest, my mortgage was due,” Coler said of what inspired him to write a bogus article claiming an FBI agent related to Hillary Clinton’s email investigation was found dead in a murder-suicide. That post was shared some 500,000 times just days before the election.

While prior years’ panels may have optimistically offered up more tech as the answer to what ails tech, this year was decidedly short on solutions.

There seemed to be, throughout the conference, a keen awareness of the limits human beings ought to place on the software that is very much eating the world.

Source: Wired


Our minds need medical attention, AI may be able to help there

AI could be useful for more than just developing Siri; it may bring about a new, smarter age of healthcare.

A team of researchers successfully predicted diagnoses of autism using MRI data from babies between six and 12 months old.


For instance, a team of American researchers used AI to aid detection of autism in babies as young as six months. This is crucial because the first two years of life see the most neural plasticity, when the abnormalities associated with autism haven’t yet fully settled in. This means that earlier intervention is better, especially when many autistic babies are diagnosed at 24 months.

While previous algorithms exist for detecting autism’s development using behavioral data, they have not been effective enough to be clinically useful. This team of researchers sought to improve on these attempts by employing deep learning. Their algorithm successfully predicted diagnoses of autism using MRI data from babies between six and 12 months old. Their system processed images of the babies’ cortical surface area, which grows too rapidly in developing autism. This smarter algorithm predicted autism so well that clinicians may now want to adopt it.

But human ailments aren’t just physical; our minds need medical attention, too. AI may be able to help there as well.

Facebook is beginning to use AI to identify users who may be at risk of suicide, and a startup company just built an AI therapist apparently capable of offering mental health services to anyone with an internet connection.

Source: Machine Design

 


AI is driving the real health care transformation

AI and machine learning are forcing dramatic business model change for all the stakeholders in the health care system.

What does AI (and machine learning) mean in the health care context?

What is the best way to treat a specific patient given her health and sociological context?

What is a fair price for a new drug or device given its impact on health outcomes?

And how can long-term health challenges such as cancer, obesity, heart disease, and other conditions be managed?

Underlying these questions is the realization that treating “the whole patient” — not just isolated conditions, but attempting to improve the overall welfare of patients who often suffer from multiple health challenges — is the new definition of success, which means predictive insights are paramount.

Answering these questions is the holy grail of medicine — the path toward an entirely new system that predicts disease and delivers personalized health and wellness services to entire populations. And this change is far more important for patients and society alike than the debate now taking place in Washington.

Those who succeed in this new world will also do one other thing: They will see AI and machine learning not as a new tool, but as a whole new way of thinking about their business model.

Source: Venture Beat
