Where doctors fear to go: AI to the rescue?

Maybe there’s a role for AI in the death and dying conversation

“It’s the rare physician who prepares patients to die well, or who will even acknowledge that death is possible, much less imminent. This is a major issue in how doctors interact with their patients — and although I’ve been an ER physician for more than 25 years, it was my father’s illness that made me realize the enormity of the problem.” Brian Goldman, ER physician*
The fallacy of ‘giving up’
In his book Being Mortal, Harvard professor and surgeon Atul Gawande argues, ultimately, that “death in America is not often enough discussed, and that patients suffer at the hands of well-meaning doctors because of it.”

Angelo Volandes, a Harvard physician and hospital-medicine specialist at Massachusetts General Hospital, “notes that while medical students and resident physicians today are being trained in end-of-life counseling, most doctors currently in practice graduated medical school before palliative care was ubiquitous, and before The Conversation” [the title of his new book about the importance of talking about death and making it part of medical curricula].

“We think our job is to ensure health and survival. But really, it is larger than that.”

“Practicing doctors are a tough group to try and train. That’s what we’ve been trying to do for the past 10 years, and we’ve barely moved the needle. I have much more faith in patients bringing this conversation to doctors than waiting for doctors.”


Is The Conversation dangerous for the patient?

In one large study Dr. Gawande cites, patients even ended up living just as long when they went into hospice care as did their aggressively medicalized counterparts. “Of 4,493 people studied, the researchers concluded, mean survival after three years was actually 29 days longer for hospice patients than for non-hospice patients.”

Source: James Hamblin, “The Fallacy of ‘Giving Up,’” The Atlantic

*Brian Goldman quote source: chatelaine.com, “An ER doctor sees the health care system through a patient’s eyes”

PL – Here’s an idea: Create an intelligent agent capable of starting The Conversation between patient and family — an intelligent agent, socialized for human interaction, that could learn the patient’s views, wants, and desires as they relate to the reality of his or her health. Socialized AI would not make recommendations or direct a course of action, but rather interact with the patient to inform and guide him or her toward meaningful conversations with family and doctors.

If doctors and nurses are focused on saving lives, and this kind of conversation is difficult for them to initiate, then perhaps this is exactly the kind of service socialized AI could provide to doctors, nurses, and their patients.