
The Role of Informed Consent in Medical AI: Balancing Innovative Advancements With Patient Rights

The use of Artificial Intelligence (AI) in medicine is quickly becoming the norm in today’s healthcare environment. From predictive analytics to streamlined protocols and medical record documentation, AI has the potential to expedite advancements in patient care. Yet the use of AI raises legal and ethical questions, one of which is the role of informed consent, a cornerstone of patient rights.

Informed Consent in Healthcare

Informed consent in healthcare is a key patient right and centers on disclosure, understanding, and explicit permission to perform a procedure. A provider must properly disclose information so patients can make decisions about their medical options and procedures.¹ Patients should have a thorough understanding of a particular test, treatment, or procedure, including its associated risks and benefits, alternative options, and the potential consequences of doing nothing.²

AI and Informed Consent—Legal and Ethical Considerations

Many healthcare institutions and medical providers have embraced and implemented AI in at least some form. Common uses of medical AI include patient scheduling and communication, documentation, summarization of patient data, image interpretation, and medical diagnosis. While these applications of AI can achieve efficiency and an accuracy that may exceed that of human doctors, they now become a factor in the decision-making process and in informed consent.³

The first issue to consider is transparency. Without regulations or organizational policies in place, it is not clear when or how to disclose the use of AI, which is now seemingly discretionary and may be influenced by the extent to which AI was used. Was AI a component of the clinical decision-making? Was it used to summarize the patient’s medical history? Was it used to explore differential diagnoses? From a risk management perspective, the recommendation would be to err on the side of transparency: disclose to patients that AI is being used and how it is being used in their care, document this discussion, and provide patients, whenever possible, the option to “opt out.”

While regulations have yet to catch up, erring on the side of disclosure is more likely to mitigate risk and allegations that there was no informed consent. To inform patients about the use of AI in their care, however, providers must have sufficient knowledge to explain to patients how the AI program works.² In discussing the use of AI with patients, the provider should be able to: 

  • Provide a general explanation of how the AI program works and explain their experience with the AI program.
  • Describe the rationale and validity of the AI program (accuracy, limitations, risks, etc.). 
  • Describe the role of the AI program and how it is used in conjunction with the provider (diagnosis, reading images, procedures/treatments).
  • Describe safeguards that are in place (cross-referencing), as well as privacy assurances and/or any impact to patient confidentiality. 

During these discussions, providers should also be prepared to answer patient questions and address concerns regarding the use of AI—and potentially, requests or concerns if AI is not being used. 

The second important issue is documentation. As with any informed consent discussion, conversations with patients should be well documented, and patients should be given options, whenever possible, regarding their preferences and comfort with the use of AI.

Documentation becomes even more critical in the precarious circumstance in which the provider and the AI program arrive at different recommendations or conclusions. The provider should clearly document the rationale and justification for their diagnosis and/or recommendation and why it deviates from the AI program’s output. To support this, Electronic Medical Record templates should be amended to include a section for documenting instances in which an AI recommendation was not followed.

As the use of medical AI increases and evolves, so will the legal landscape. One significant potential implication will be the impact medical AI will have on the standard of care.

Will the future standard of care require the use of AI and the provider’s review of its recommendations? Where will liability fall in cases of misdiagnosis? Will we see AI “testifying” in court?

These questions can trigger both excitement and fear. It is important to acknowledge that while medical AI is intended to improve patient care and safety, it is not infallible and should not be immune to the foundational patient rights of informed consent, privacy, and autonomy.   


Yvette Ervin, JD, is a Senior Risk Management and Patient Safety Specialist. Questions or comments related to this article should be directed to YErvin@CAPphysicians.com.

References

¹Hai Jin Park, “Patient perspectives on informed consent for medical AI: A web-based experiment,” National Library of Medicine, April 30, 2024, https://pmc.ncbi.nlm.nih.gov/articles/PMC11064747/ (accessed Nov 11, 2024)

²Laura M. Cascella, MA, CPHRM, “Artificial Intelligence and Informed Consent,” MedPro Group, https://www.medpro.com/artificial-intelligence-informedconsent (accessed Nov 11, 2024)

³Joe Kita, “Are You Ready for AI to Be a Better Doctor Than You,” Medscape, April 12, 2024, https://www.medscape.com/viewarticle/are-you-ready-ai-be-better-doctor-… (accessed Nov 11, 2024)

⁴Graham Billingham, MD, FACEP, FAAEM, et al., “Doing It Right the First Time: Recommendations to Safely Use AI in Healthcare,” American Society for Health Care Risk Management Annual Conference, San Diego, California, October 2024