AI Privacy Incident
March 25, 2026

EFF FOIA lawsuit over Medicare WISeR AI prior-authorization program

Vendor: AI vendors undisclosed; program operated by the Centers for Medicare & Medicaid Services (CMS)
Product: WISeR (Wasteful and Inappropriate Service Reduction)
Severity: high
Status: ongoing
Users affected: approximately 6.4 million Medicare beneficiaries across six states

Summary

On March 25, 2026, the Electronic Frontier Foundation filed a Freedom of Information Act lawsuit against the Centers for Medicare & Medicaid Services seeking records on WISeR, an AI prior-authorization program affecting roughly 6.4 million Medicare beneficiaries across six states. EFF alleges that vendor compensation scales with the volume of denials (reportedly up to 20 percent of identified savings) and contends this creates a risk of discriminatory delay or denial of care. CMS has not publicly named the AI vendors or disclosed the program's testing and audit records.

What happened

  • CMS Administrator Mehmet Oz announced the program in 2025.
  • CMS launched WISeR in January 2026 as a multi-state program that uses algorithms to evaluate prior-authorization requests for Medicare-covered services.
  • Within weeks of the January 2026 launch, healthcare providers in the participating states reported delays in care approval, communication gaps, and administrative strain.
  • On March 25, 2026, EFF filed a FOIA lawsuit against CMS with the assistance of Stanford Law School's Intellectual Property clinic.
  • The complaint seeks agreements with software vendors, testing records covering accuracy, bias, and hallucinations, and audit and monitoring data for WISeR and participating vendors.

Timeline

  • 2025 - CMS announces the WISeR program.
  • 2026-01 - WISeR goes live in six states covering roughly 6.4 million beneficiaries.
  • 2026-01 to 2026-03 - Providers report delays and administrative strain tied to the program.
  • 2026-03-25 - EFF files FOIA lawsuit against CMS.

What remains unclear

  • CMS has not publicly named the AI vendors operating WISeR's algorithmic components.
  • The specific compensation structure for vendors has not been disclosed by CMS; EFF reports incentives of up to 20 percent of identified savings.
  • No accuracy, bias, or hallucination testing records for WISeR's algorithms are publicly available.
  • The rate at which WISeR recommends denial or delay of prior-authorization requests has not been published.
  • As of the filing, CMS has not publicly responded to the lawsuit.
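The incentive concern above can be made concrete with a toy calculation. This is a hypothetical sketch: the only reported figure is EFF's "up to 20 percent of identified savings," and the actual contract terms, dollar amounts, and fee mechanics remain undisclosed.

```python
# Hypothetical illustration of a share-of-savings fee structure.
# All dollar figures are invented; only the 20% share comes from
# EFF's reporting, and the real contract terms are not public.

def vendor_fee(denied_claims_value: float, share: float = 0.20) -> float:
    """Fee a vendor would earn if paid a fixed share of 'savings'
    (here, the dollar value of denied or delayed authorizations)."""
    return denied_claims_value * share

# Under this structure, every additional dollar of denied care
# raises the vendor's fee -- the incentive-alignment concern EFF raises.
print(vendor_fee(1_000_000))  # 20% of $1M in denials -> 200000.0
```

The point of the sketch is simply that a fee proportional to "savings" is also a fee proportional to denials, which is why the undisclosed compensation structure is central to the records request.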

Broader context

AI systems that influence access to regulated healthcare sit at a particularly difficult point on the transparency spectrum: their outputs can affect millions of patients, the underlying vendor contracts are typically shielded as confidential procurement records, and the training and evaluation data are rarely part of the public record. The WISeR lawsuit is an early test of whether standard public-records mechanisms can surface enough detail about a government-deployed AI program to make meaningful review possible.

Written by

Selvam Sivakumar

Founder, Elephas.app

Selvam Sivakumar is the founder of Elephas and an expert in AI, Mac apps, and productivity tools. He writes about practical ways professionals can use AI to work smarter while keeping their data private.

Related Resources

Lovable Hacked: API Flaw Exposes Thousands of Projects on the Lovable AI App Builder

A security researcher exposed a Lovable API flaw that leaked source code, AI chat histories and database credentials across thousands of projects. Lovable denies data was breached; its apology reveals a February 2026 backend regression.

ChatGPT Launches Ads as Privacy Researcher Resigns from OpenAI

A growing wave of AI safety researchers are leaving major companies as ChatGPT goes ad-supported.

Claude Mythos Preview: First AI to Complete a 32-Step Autonomous Cyber Attack (AISI 2026)

The UK AI Security Institute evaluated Claude Mythos Preview and found the first AI model to autonomously complete a 32-step corporate network attack. Full analysis and defender guidance.

Anthropic Leaked Their Source Code Twice in One Week

512,000+ lines of Claude Code leaked via npm. Days earlier, 3,000 internal files were publicly accessible. Unreleased features, security risks, and what it means for AI privacy.
