It’s inoperable cancer. Should AI make the call about what happens next?


Rebecca Weintraub Brendel, director of Harvard Medical School’s Center for Bioethics. Photo: Veasey Conway/Harvard Staff Photographer


Arrival of large language models is sparking discussion of how the technology’s use in patient care may be broadened, and of what it means to be human


AI is already being used in clinics to help analyze imaging data, such as X-rays and scans. But the recent arrival of sophisticated large language AI models is forcing consideration of broader uses of the technology in other areas of patient care. In this edited conversation with the Gazette, Rebecca Weintraub Brendel, director of Harvard Medical School’s Center for Bioethics, looks at end-of-life options and the importance of remembering that just because we can do something doesn’t always mean we should.


When we talk about artificial intelligence and end-of-life decision-making, what are the important questions at play?

End-of-life decision-making is the same as other decision-making because ultimately, we do what patients want us to do, provided they are competent to make those decisions and what they want is medically indicated — or at least not medically contraindicated.

One complication would be if a patient is so ill that they can’t tell us what they want. The second challenge is understanding in both a cognitive way and an emotional way what the decision means.

People sometimes say, “I would never want to live that way,” but they wouldn’t make the same decision in all circumstances. Patients who’ve lived with progressive neurologic conditions like ALS for a long time often have a sense of when they’ve reached their limit. They’re not depressed or frightened, and they are ready to make their decision.

On the other hand, depression is quite prevalent in some cancers and people tend to change their minds about wanting to end their lives once symptoms are treated.

So, if someone is young and says, “If I lose my legs, I wouldn’t want to live,” should we allow for shifting perspectives as we get to the end of life?

When we’re faced with something that alters our sense of bodily integrity, our sense of ourselves as fully functional human beings, it’s natural, even expected, that our capacity to cope can be overwhelmed.

But there are pretty devastating injuries where a year later, people report having a better quality of life than before, even for severe spinal cord injuries and quadriplegia. So, we can overcome a lot, and our capacity for change, for hope, has to be taken into account.

So, how do we, as healers of mind and body, help patients make decisions about their end of life?

“I’d be hard-pressed to say that we’d ever want to give away our humanity in making decisions of high consequence.”

For someone with a chronic illness, the standard of care has those decisions happening along the way, and AI could be helpful there. But at the point of diagnosis — do I want treatment, or do I opt for palliation from the beginning? — AI might give us a sense of what one might anticipate, how impaired we might be, whether pain can be palliated, or what the tipping point will be for an individual person.

So, the ability to have AI gather and process orders of magnitude more information than the human mind can — without being colored by fear, anxiety, responsibility, relational commitments — might give us a picture that could be helpful.

What about the patient who is incapacitated, with no family, no advance directives, so the decision falls to the care team?

We have to have an attitude of humility toward these decisions. Having information can be really helpful. With somebody who’s never going to regain capacity, we’re stuck with a few different options. If we really don’t know what they would have wanted, because they’re somebody who avoided treatment, really didn’t want to be in the hospital, or didn’t have a lot of relationships, we might assume they wouldn’t have sought treatment for something that was life-ending. But we have to be aware that we’re making a lot of assumptions, even if we’re not necessarily doing the wrong thing. Having a better prognostic sense of what might happen is really important to that decision, which, again, is where AI can help.

I’m less optimistic about the use of large-language models for making capacity decisions or figuring out what somebody would have wanted. To me it’s about respect. We respect our patients and try to make our best guesses, and realize that we all are complicated, sometimes tortured, sometimes lovable, and, ideally, loved.

Are there things that AI should not be allowed to do? I’m sure it could make end-of-life recommendations rather than simply gather information.

We have to be careful where we use “is” to make an “ought” decision.

If AI told you that there is less than 5 percent chance of survival, that alone is not enough to tell us what we ought to do. If there’s been a terrible tragedy or a violent assault on someone, we would look at that 5 percent differently from someone who’s been battling a chronic illness over time and says, “I don’t want to go through this again, and I don’t want to put others through this. I’ve had a wonderful life.”

“If AI told you that there is less than 5 percent chance of survival, that alone is not enough to tell us what we ought to do.”

In diagnostic and prognostic assessments, AI has already started to outperform physicians, but that doesn’t answer the critical question of how we interpret that, in terms of what our default rules should be about human behavior.

It can help us be more transparent, accountable, and respectful of each other by making it explicit that, as a society, if these things happen, we’re not going to resuscitate unless you tell us otherwise, or that we will resuscitate when we think there’s a good chance of recovery.

I don’t want to underestimate AI’s potential impact, but we can’t abdicate our responsibility to center human meaning in our decisions, even when based on data.

So these decisions should always be made by humans?

“Always” is a really strong word, but I’d be hard-pressed to say that we’d ever want to give away our humanity in making decisions of high consequence.

Are there areas of medicine where people should always be involved? Should a baby’s first contact with the world always be human hands? Or should we just focus on quality of care?

I would want people around, even if a robot does the surgery because the outcome is better. We would want to maintain the human meaning of important life events.

Another question that comes up is: What will it mean to be a physician, a healer, a healthcare professional? We hold a lot of information, and that information asymmetry is one of the things that has caused medical and other healthcare professionals to be held in high esteem. But it’s also about what we do with the information: being a great diagnostician, having an exemplary bedside manner, and ministering to patients at a time when they’re suffering. How do we redefine the profession when the things we thought we were best at, we may no longer be the best at?

At some point, we may have to question human interaction in the system. Does it introduce bias, and to what extent is processing by human minds important? Is an LLM going to create new information, come up with a new diagnostic category, or a disease entity? What ought the responsibilities of patients and doctors be to each other in a hyper-technological age? Those are important questions that we need to look at.

Are those conversations happening?

Yes. In our Center for Bioethics, one of the things we’re looking at is how artificial intelligence bears on some of our timeless challenges in health. Technology tends to go where there’s capital and resources, yet LLMs and AI advances could allow us to care for swaths of the population where there’s no doctor within a day’s travel. Holding ourselves accountable on questions of equity, justice, and advancing global health is really important.

There are questions about moral leadership in medicine. How do we make sure that output from LLMs and future iterations of AIs comport with the people we think we are and the people we ought to be? How should we educate to make sure that the values of the healing professions continue to be front and center in delivering care? How do we balance the public’s health and individual health, and how does that play out in other countries?

So, when we talk about patients in under-resourced settings and about AI’s capabilities versus what it means to be human, we need to be mindful that in some parts of the world, to be human is to suffer and to lack access to care?

Yes, because, increasingly, we can do something about it. As we’re developing tools that can allow us to make huge differences in practical and affordable ways, we have to ask, “How do we do that and follow our values of justice, care, respect for persons? How do we make sure that we don’t abandon them when we actually have the capacity to help?”