
    Your doctor used AI to help make a decision about your care. Did they tell you?

    Jarno Peltokangas

    Published 4/10/2026

    Probably not. And a new JAMA Perspective from Stanford argues that silence may no longer be defensible.

    Written by Prof. Michelle Mello (Stanford Law), Dr. Danton Char, and Sonnet Xu, "Ethical Obligations to Inform Patients About Use of AI Tools" does something the field has been quietly avoiding: it applies the actual logic of informed consent doctrine to AI deployment in clinical settings, and asks who bears responsibility for telling patients what's happening to them.

    The survey data the authors cite is striking. 60% of US adults say they'd be uncomfortable knowing their physician relied on AI. 70-80% have low expectations that AI will improve their care. Only one in three trust health systems to use AI responsibly. And 63% say they would definitely want to be notified when AI is guiding their care.

    This isn't a niche concern. This is the mainstream patient population on which AI tools are already being quietly deployed.

    The legal argument at the center of the paper is sharper than it might first appear. Informed consent doctrine doesn't only govern procedures - it requires disclosing anything material to a reasonable patient's decision about their care. If a meaningful share of patients would think or decide differently knowing AI was involved, that already meets the standard legal threshold for materiality. The paper makes a credible case that many current deployments are on the wrong side of that line.

    The authors propose a two-factor framework for when disclosure becomes ethically - and potentially legally - required. The first factor is risk of physical harm: how likely is an AI error to affect patient safety, and how much human oversight actually exists to catch it? The second is patient opportunity to act: can the patient do something meaningful with the information - opt out, seek a second opinion, bring their own judgment to bear? Where both factors are elevated, the authors argue that explicit consent isn't merely best practice. It may be legally required.
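    To make the framework's logic concrete, here is a minimal sketch of how a health system's governance committee might encode the two-factor test when reviewing a given AI deployment. Only the both-factors-elevated case comes from the paper; the intermediate tier, the tier names, and all function and type names here are illustrative assumptions, not the authors' prescriptions.

    ```python
    from enum import Enum

    class Level(Enum):
        """Coarse rating a governance committee might assign per factor."""
        LOW = "low"
        ELEVATED = "elevated"

    def disclosure_posture(harm_risk: Level, patient_agency: Level) -> str:
        """Apply the two-factor test to one AI deployment.

        harm_risk: likelihood an AI error affects patient safety, net of
            the human oversight that actually exists to catch it.
        patient_agency: whether the patient can act meaningfully on the
            information (opt out, seek a second opinion, weigh in).
        """
        # The paper's claim: where BOTH factors are elevated, explicit
        # consent may be legally required, not merely best practice.
        if harm_risk is Level.ELEVATED and patient_agency is Level.ELEVATED:
            return "explicit consent"
        # The two tiers below are illustrative assumptions filling in
        # the cases the paper leaves to organizational judgment.
        if harm_risk is Level.ELEVATED or patient_agency is Level.ELEVATED:
            return "proactive notification"
        return "general transparency (e.g., a system-level notice)"

    # Example: an AI triage tool with little clinician review, where
    # patients could meaningfully opt out or seek a second opinion.
    print(disclosure_posture(Level.ELEVATED, Level.ELEVATED))  # explicit consent
    ```

    The point of writing it down this way is that the decision is made once, at the organizational level, per tool and deployment context - which is exactly where the authors argue it belongs.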

    There are a few governance dimensions here worth sitting with. The authors are clear that disclosure decisions should be made at the organizational level, not left to individual clinicians to navigate at the bedside. This places a real institutional accountability claim on health systems - one that most haven't begun to operationalize.

    They also flag a genuine tension: over-disclosure carries its own risks. Blanket notification about every AI tool involved in a patient's care could erode trust in recommendations that are actually improved by AI. The framework is explicitly not about reflexive transparency for its own sake. It's about proportionate disclosure tied to what actually matters to patients given their specific situation.

    The regulatory vacuum here is significant. There is currently no federal requirement in the US mandating AI disclosure to patients in clinical settings. California's AB 3030, effective January 2025, requires disclaimers on generative-AI-produced communications about a patient's clinical information - but the requirement is waived when a licensed clinician reviews the output first. A state-by-state patchwork is slowly forming, but there is no coherent national standard, and no serious federal legislative momentum toward one.

    AI is already a routine, invisible participant in clinical care - in imaging interpretation, EHR-embedded decision support, patient portal message drafting, prior authorization decisions. Patients don't know. Most don't know to ask. And the system has no obligation to tell them.

    The doctor-patient relationship is grounded in transparency as a precondition of trust. This paper makes the case, carefully and with legal specificity, that we are already overdue for a reckoning with what that means in an age of algorithmic medicine.

    Full paper: https://jamanetwork.com/journals/jama/article-abstract/2836687