AI on Trial: Who’s Liable When Clinical Algorithms Go Wrong?
Original Article: https://www.medscape.com/viewarticle/artificial-intelligence-trial-whos-liable-when-clinical-2026a10002tk
The Issue
Artificial intelligence (AI) is increasingly used in healthcare to support diagnosis, risk prediction, and treatment decisions. When an AI system contributes to a clinical error or patient harm, a difficult question arises: who is legally responsible? Current laws were not designed for AI-driven care, leaving uncertainty about how liability should be assigned.
Who Could Be Held Responsible
The article explains that physicians will continue to carry most of the legal risk because they are considered the final decision-makers, even when they rely on AI tools. Health systems may also face liability if they fail to properly evaluate and monitor AI systems or to train clinicians in their use. Technology companies that develop clinical algorithms often limit their legal exposure through contracts, making it harder to hold them responsible unless there is a clear product defect.
Why Liability Is Complicated
Clinical AI involves multiple parties, including clinicians, hospitals, and software developers. Errors involving AI could be handled as medical malpractice, product liability, or negligence claims, depending on the circumstances. Because courts have little experience with such cases, outcomes are likely to vary, and no clear, consistent legal standard has yet emerged.
Why This Matters
Unclear liability can affect whether clinicians and health systems are willing to adopt AI tools. If doctors fear being held responsible for mistakes made by algorithms they did not design, they may be reluctant to use them. At the same time, patients may face challenges seeking accountability when harm occurs. Clarifying liability rules will be important as AI becomes more integrated into clinical care.

