Doctors' growing AI deepfakes problem

Original Story by Axios
May 6, 2026
Context:

A surge of AI-generated deepfakes is using physicians’ likenesses to promote questionable products and spread misinformation, threatening patient trust and safety. The American Medical Association (AMA) has urged lawmakers to modernize identity protections, close legal gaps, and press platforms to remove impersonations swiftly, signaling a broader push for privacy, enforcement, and accountability. California is weighing a ban on doctor deepfakes and already requires disclosures on AI-generated ads, while Pennsylvania’s medical board issued a cease-and-desist demand to a chatbot posing as a licensed doctor. The trend extends beyond ads to fake diagnostic content and misused medical data, carrying fraud and cybersecurity risks. The trajectory suggests stronger regulatory and platform action will be needed to restore trust and curb misuse, even as individual cases raise questions about liability and response protocols.

Dive Deeper:

  • The American Medical Association called on both federal and state lawmakers to close legal gaps and modernize identity protections, framing deepfake impersonations as a public health and safety crisis and pressing for actionable guidelines for physicians’ responses and insurance coverage considerations.

  • California has begun requiring disclosures on AI-generated advertisements and is debating a measure that would explicitly ban doctor deepfakes, illustrating a state-level pivot toward disclosure and prohibition to curb misuse.

  • Pennsylvania’s medical board took enforcement action by demanding a tech company cease and desist after a chatbot falsely claimed to be a licensed physician in the state, highlighting regulatory pushback against AI impersonations.

  • Physicians report increasing instances of their identities being used to promote wellness products and unapproved devices, underscoring a widening scope of impact beyond individual reputation to consumer safety and market integrity.

  • A prominent example involves Dr. Sanjay Gupta and other clinicians whose likenesses have appeared in convincing ads for dubious treatments, showing that even high-profile physicians are at rising risk as awareness of the problem spreads.

  • Clinician-focused research indicates deepfake X-ray images can be difficult to detect, with a recent Radiology study showing a substantial portion of clinicians failing to identify fakes despite warnings, raising concerns about diagnostic integrity and patient harm.

  • Experts warn of broader threats including insurance fraud, data theft, and potential cyberattacks on hospital networks that could inject synthetic images to alter diagnoses or disrupt care, emphasizing the need for robust governance and rapid response mechanisms.
