Oncologists Express Conflicted Views About AI in Clinical Practice

Patients with cancer do not need a detailed understanding of artificial intelligence (AI), but they should consent to use of AI models during treatment, according to a survey of 204 U.S. oncologists.

On the key question of patient consent, 81.4% of the oncologists said consent should be required for use of AI models. However, only 23% of respondents believed that AI used for treatment decisions “should be explainable by patients.” More than a third of the oncologists said patients should make the decision in a scenario wherein clinician recommendations are at odds with those of AI models.

More than 90% of the survey participants said AI developers should be responsible for medico-legal issues associated with AI use, reported Gregory Abel, MD, MPH, of Dana-Farber Cancer Institute in Boston, and co-authors in JAMA Network Open.

“One of the surprising things we saw was the sort of discordance between oncologists not feeling like it was necessary for patients to know in detail how a model works or be able to explain it to some extent, but at the same time kind of deferring to patients to make decisions when they and an AI model disagreed in this hypothetical scenario,” co-author Andrew Hantel, MD, also of Dana-Farber, told MedPage Today. “More than anything else, this probably highlights how unsure oncologists are about how to really act in relation to an AI tool when it might be available to them in the clinic.”

The emergence of AI offers the potential to advance cancer research and clinical care. The FDA’s recent approval of AI models with oncology applications, combined with the complexity of personalized cancer care, suggests the field of oncology is poised for an “AI revolution,” the authors noted. At the same time, concerns have arisen about AI bias, the ability to explain how a result was determined, responsibility for errors or misuse, and humans’ deference to AI-based results.

To obtain a first look at oncologists’ views about AI and its ethical implications, Hantel and colleagues prepared a 24-question survey with input from oncologists, survey experts, bioethicists, and AI researchers. They distributed the survey to a random sample of 399 oncologists identified through the National Plan & Provider Enumeration System; 204 usable surveys were returned for analysis.

The survey respondents represented 37 states. About 63% identified as male, a similar share identified as white, and 29.4% were from academic centers. Additionally, 53.4% of the oncologists had no prior training in AI, and 45.3% said they were familiar with clinical decision models. More than 90% of the respondents said they could benefit from dedicated AI training, but three-fourths were unaware of appropriate resources.

Few participants said prognostic (13.2%) and clinical decision (7.8%) AI models could be used clinically when only researchers could explain them. More than 80% of respondents said AI models should be explainable by oncologists, whereas just 13.8% and 23.0%, respectively, said prognostic and clinical decision models should be explainable by patients.

The questionnaire included a hypothetical scenario wherein an FDA-approved AI model selected a different treatment regimen from the one the oncologist intended to recommend. Most often (36.8%), the oncologists said they would present both options and let the patient decide. The authors found a significant relationship between practice setting and the recommended option. Respondents from academic settings were more likely to choose the AI recommendation over their own (OR 2.99, 95% CI 1.39-6.47, P=0.004) or to leave the decision to the patient (OR 2.56, 95% CI 1.19-5.51, P=0.02).

A majority of the oncologists said patients should consent to the use of AI models, substantially more for treatment decisions (81.4%) than for diagnostic decisions (56.4%). Oncologists in non-academic settings were more likely to support patient consent for use of AI models in treatment decisions (OR 2.39, 95% CI 1.13-5.06), as were those with no prior AI training (OR 2.81, 95% CI 1.32-6.00).

Three-fourths of respondents said oncologists should protect patients from biased AI, yet only 27.9% were confident they could identify how representative the data in an AI model were; those lacking such confidence included two-thirds of the respondents who said oncologists should protect patients from AI bias. Oncologists in academic settings were more likely to express confidence in their ability to judge how representative an AI model’s data were (OR 2.73, 95% CI 1.43-5.23).

Most respondents (90.7%) said AI developers should be responsible for medico-legal problems arising from AI, while 47.1% said physicians should share that responsibility and 43.1% said hospitals should.

The current lack of guidelines, legal precedents, and regulatory statutes for AI use in clinical practice may have contributed to the apparent conflict between some of the oncologists’ answers.

“The legal structure and the way medicine is practiced in the U.S., physicians are kind of responsible for decisions,” said Hantel. “If it becomes the norm that an artificial intelligence tool is part of standard of care, then that [begs] the question, ‘Am I responsible if I go along with [the AI recommendation] and it’s wrong, or if I don’t go along with it, and it’s right?'”

“I don’t think anybody is fully prepared to deal with [those situations],” he continued. “I don’t think there are any large-language models that have been approved that would put us in those situations. We wanted this survey to help us kind of understand some of these things, where these uncertainties and conflicts and issues might come up in the very near future.”

Charles Bankhead is senior editor for oncology and also covers urology, dermatology, and ophthalmology. He joined MedPage Today in 2007.

Disclosures

The study was supported by the National Cancer Institute, the Dana-Farber McGraw/Patterson Research Fund for Population Sciences, and the Mark Foundation Emerging Leader Award.

Hantel disclosed relationships with AbbVie, AstraZeneca, the American Journal of Managed Care, Genentech, and GSK.

Abel reported no relevant relationships with industry.

Co-authors reported multiple relationships with industry.

Primary Source

JAMA Network Open

Source Reference: Hantel A, et al. “Perspectives of oncologists on the ethical implications of using artificial intelligence for cancer care” JAMA Netw Open 2024; DOI: 10.1001/jamanetworkopen.2024.4077.
