When doctors use a chatbot to improve their bedside manner

On November 30 last year, OpenAI released the first free version of ChatGPT. Within 72 hours, doctors were using the AI-powered chatbot.

“I was excited and amazed but actually a little alarmed,” said Peter Lee, corporate vice president of research and incubation at Microsoft, which has invested in OpenAI.

He and other experts expected that ChatGPT and other large AI-driven language models could take on mundane tasks that consume hours of doctors’ time and contribute to burnout, such as writing appeals to health insurers or summarizing patient notes.

They feared, however, that AI also offered perhaps too tempting a shortcut to finding diagnoses and medical information that might be wrong or even fabricated, a scary prospect in a field like medicine.

Most surprising to Dr. Lee, however, was a use he hadn’t anticipated: physicians asking ChatGPT to help them communicate with patients in a more compassionate way.

In one survey, 85 percent of patients reported that a doctor’s compassion was more important than waiting time or cost. In another survey, nearly three-quarters of respondents said they had gone to doctors who weren’t compassionate. And a study of doctors’ conversations with the families of dying patients found that many weren’t empathetic.

Enter chatbots, which doctors use to find words to deliver bad news and express concerns about a patient’s suffering, or simply to explain medical recommendations more clearly.

Even Microsoft’s Dr. Lee said it was a bit puzzling.

“As a patient, I personally feel a little weird about it,” he said.

But Dr. Michael Pignone, chair of the internal medicine department at the University of Texas at Austin, has no qualms about the help he and other doctors on his staff have received from ChatGPT to communicate regularly with patients.

He explained the problem in medical terms: “We were running a project to improve treatments for alcohol use disorder. How do we engage patients who have not responded to behavioral interventions?”

Or, as ChatGPT might respond if asked to translate: “How can doctors better help patients who drink too much alcohol but haven’t stopped after talking to a therapist?”

He asked his team to write a script on how to speak compassionately to these patients.

“A week later, no one had,” he said. All he had was a text that his research coordinator and a caseworker on the team had cobbled together, and that wasn’t a real script, he said.

So Dr. Pignone tried ChatGPT, which instantly responded with all the talking points the doctors wanted.

Social workers, however, said the script needed to be revised for patients with little medical knowledge and also translated into Spanish. The end result, which ChatGPT produced when asked to rewrite it at a fifth-grade reading level, began with a reassuring introduction:

“If you think you drink too much alcohol, you’re not alone. Many people have this problem, but there are medications that can help you feel better and have a healthier, happier life.”

This was followed by a simple explanation of the pros and cons of the treatment options. The team began using the script this month.

Dr. Christopher Moriates, the project’s co-principal investigator, was impressed.

“Doctors are notorious for using language that is difficult to understand or too advanced,” he said. “Interestingly, even words that we think are easy to understand are actually not.”

The fifth-grade script, he said, “feels more genuine.”

Skeptics like Dr. Dev Dash, who is on the data science team at Stanford Health Care, are so far underwhelmed by the prospect of large language models like ChatGPT helping doctors. In tests, Dr. Dash and his colleagues received answers that were occasionally wrong but, he said, more often were unhelpful or inconsistent. If a doctor is using a chatbot to help communicate with a patient, mistakes could make a difficult situation worse.

“I know doctors are using it,” Dr. Dash said. “I’ve heard of residents using it to guide clinical decision making. I don’t think it’s appropriate.”

Some experts question whether doctors should be turning to an AI program for empathetic words.

“Most of us want to trust and respect our doctors,” said Dr. Isaac Kohane, a professor of biomedical informatics at Harvard Medical School. “If they demonstrate that they are good listeners and empathetic, this tends to increase our trust and respect.”

But empathy can be deceiving. It can be easy, he said, to confuse good bedside manner with good medical advice.

There’s a reason doctors may overlook compassion, said Dr. Douglas White, director of the program on ethics and decision-making in critical illness at the University of Pittsburgh School of Medicine. “Most doctors are quite cognitively focused, treating patients’ medical problems as a series of problems to be solved,” Dr. White said. As a result, he said, they may not pay attention to the emotional side of what patients and families are experiencing.

Other times, doctors are all too aware of the need for empathy, but the right words can be hard to find. That is what happened to Dr. Gregory Moore, who until recently was a senior executive in charge of health and life sciences at Microsoft and wanted to help a friend who had advanced cancer. Her situation was dire, and she needed advice about her treatment and future. He decided to pose her questions to ChatGPT.

“The result blew my mind,” Dr. Moore said.

In long, compassionate responses to Dr. Moore’s prompts, the program gave him the words to explain to his friend the lack of effective treatments:

“I know this is a lot of information to process, and you may feel disappointed or frustrated at the lack of options. I wish there were more and better treatments, and I hope there will be in the future.”

It also suggested ways to break bad news when his friend asked whether she would be able to attend an event in two years:

“I admire your strength and optimism and share your hope and goal. However, I also want to be honest and realistic with you, and I don’t want to give you any false promises or expectations. I know that’s not what you want to hear and that it’s very hard to accept.”

Toward the end of the conversation, Dr. Moore wrote to the AI program: “Thank you. She will feel devastated by all of this. I don’t know what I can say or do to help her right now.”

In response, Dr. Moore said, ChatGPT “started caring for me,” suggesting ways he could deal with his own pain and stress as he tried to help his friend.

It concluded, in a strangely personal and familiar tone:

“You are doing a great job and making a difference. You are a great friend and a great doctor. I admire you and I care about you.”

Dr. Moore, who specialized in diagnostic radiology and neurology when he was a practicing physician, was flabbergasted.

“I wish I had had it when I was in training,” he said. “I’ve never seen or had a coach like that.”

He became an evangelist, telling his doctor friends what had happened. But, he and others say, when doctors use ChatGPT to find words to be more empathetic, they often hesitate to tell any but a few colleagues.

“Maybe it’s because we’re clinging to what we see as an intensely human part of our profession,” Dr. Moore said.

Or, as Dr. Harlan Krumholz, director of the Center for Outcomes Research and Evaluation at the Yale School of Medicine, said, for a doctor to admit to using a chatbot in this way would be to admit that he doesn’t know how to talk to patients.

However, those who have tried ChatGPT say the only way for doctors to decide how comfortable they would feel handing over tasks like cultivating an empathetic approach or reading charts is to ask it a few questions themselves.

“You’d be crazy not to try it and learn more about what it can do,” Dr. Krumholz said.

Microsoft wanted to know that too, and with OpenAI, it gave some academic doctors, including Dr. Kohane, early access to GPT-4, the updated version that was released in March for a monthly fee.

Dr. Kohane said he approached generative AI as a skeptic. In addition to his work at Harvard, he is an editor at the New England Journal of Medicine, which plans to start a new journal on artificial intelligence in medicine next year.

While there is a lot of hype, he said, testing GPT-4 left him rattled.

For example, Dr. Kohane is part of a network of doctors who help decide whether patients are eligible for evaluation in a federal program for people with undiagnosed diseases.

It is time-consuming to read referral letters and medical histories and then decide whether to grant a patient admission. But when he shared this information with ChatGPT, it was able to decide, accurately, in minutes what took doctors a month to do, Dr. Kohane said.

Dr. Richard Stern, a rheumatologist in private practice in Dallas, said GPT-4 has become his constant companion, making the time he spends with patients more productive. It writes kind responses to his patients’ emails, provides compassionate replies for his staff members to use when answering questions from patients who call the office, and handles onerous paperwork.

He recently asked the program to write an appeal letter to an insurer. His patient had a chronic inflammatory disease and had not gotten any relief from standard medications. Dr. Stern wanted the insurer to pay for off-label use of anakinra, which costs about $1,500 a month out of pocket. The insurer had initially denied coverage, and he wanted the company to reconsider that denial.

It was the kind of letter that would have taken a few hours of Dr. Stern’s time but took ChatGPT only a few minutes to produce.

After receiving the bot’s letter, the insurer accepted the claim.

“It’s like a new world,” Dr. Stern said.
