Breaking News · Updated · 14 min read

Anthropic Sues the Pentagon, 1.5 Million Quit ChatGPT: Inside the AI Trust Crisis Reshaping the Industry

On March 9, 2026, Anthropic took the extraordinary step of suing the U.S. government over the Pentagon's “supply chain risk” designation. The same week, over 1.5 million users cancelled their ChatGPT subscriptions in a boycott called QuitGPT. Microsoft, Nvidia, and Google rallied behind Anthropic. For the first time, consumers are choosing AI tools based on trust — not just features.

  • 2 — lawsuits filed against the Pentagon
  • 1.5M — ChatGPT subscriptions cancelled
  • $B+ — potential Anthropic revenue at risk
  • 30+ — Google & OpenAI staff filed amicus briefs

What You Need to Know

  • Anthropic filed two federal lawsuits on March 9 challenging its “supply chain risk” designation, claiming First and Fifth Amendment violations
  • The QuitGPT boycott has reached 1.5 million cancelled ChatGPT subscriptions and 2.5 million total supporters
  • Microsoft filed an amicus brief supporting Anthropic; 30+ Google DeepMind and OpenAI employees (including Jeff Dean) did the same
  • A preliminary injunction hearing is set for March 24 before Judge Rita F. Lin in San Francisco
  • Anthropic says the designation could reduce its 2026 revenue by “multiple billions of dollars”
  • OpenAI's head of robotics Caitlin Kalinowski resigned on March 7 over the Pentagon deal
  • A major AI accountability march is planned for March 21 in San Francisco and other cities

The Lawsuits: Anthropic Takes the Pentagon to Court

On March 9, Anthropic filed two federal lawsuits challenging the Pentagon's decision to designate the AI company a “supply chain risk” — a classification previously reserved for foreign adversaries like the Chinese tech company Huawei. The first lawsuit was filed in the U.S. District Court for the Northern District of California. The second, targeting a different statutory basis, was filed in the D.C. Circuit Court of Appeals.

The lawsuits mark the first time a major American tech company has sued the U.S. government over an AI policy dispute. According to CNBC, Anthropic is arguing two constitutional violations: the First Amendment, claiming the designation punishes the company for publicly advocating AI safety guardrails; and the Fifth Amendment, alleging the government imposed severe financial penalties without offering Anthropic a meaningful opportunity to respond.

“Across Anthropic's entire business, and adjusting for how likely any given customer is to take a maximal reading, the government's actions could reduce Anthropic's 2026 revenue by multiple billions of dollars.”

— From Anthropic's court filing, reported by Bloomberg

Judge Rita F. Lin of the U.S. District Court for the Northern District of California is scheduled to hear Anthropic's request for a preliminary injunction on March 24. If granted, the injunction would prevent the government from enforcing the designation while the case proceeds.

“Disagreeing with the government is the most American thing in the world. And we are patriots. In everything we have done here, we have stood up for the values of this country.”

— Dario Amodei, Anthropic CEO, in a CBS News exclusive interview

Timeline: How We Got Here

July 2025

Anthropic signs $200M Pentagon contract

Claude becomes the first major AI model deployed on the government's classified networks, with contractual restrictions on mass surveillance and autonomous weapons.

February 24, 2026

Anthropic releases RSP v3.0

The updated Responsible Scaling Policy formalizes Anthropic's red lines on surveillance and autonomous weapons — the same red lines the Pentagon wanted removed.

February 26, 2026

Pentagon gives Anthropic a Friday deadline

Defense Secretary Hegseth demands Anthropic agree to “all lawful purposes” use of Claude without guardrails, or face consequences.

February 27, 2026

Trump orders federal agencies to stop using Anthropic

After Anthropic refuses to budge, President Trump directs “EVERY Federal Agency” to “IMMEDIATELY CEASE” using Anthropic's technology via Truth Social.

March 3, 2026

QuitGPT protest at OpenAI HQ

Thousands of demonstrators gather outside OpenAI's San Francisco headquarters. The QuitGPT movement surges past 1.5 million cancelled subscriptions after OpenAI signs a Pentagon deal just hours after Anthropic's blacklisting.

March 6, 2026

Pentagon formally designates Anthropic a supply chain risk

The DOD applies the same label used for Huawei and other foreign adversaries, requiring defense contractors to certify they don't use Claude.

March 7, 2026

OpenAI robotics lead resigns over Pentagon deal

Caitlin Kalinowski, OpenAI's head of robotics and consumer hardware, publicly resigns: “Surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”

March 9, 2026

Anthropic files two federal lawsuits

Suits filed in Northern California and D.C. Circuit, citing First and Fifth Amendment violations. Microsoft, Jeff Dean, and 30+ industry figures file amicus briefs.

March 10, 2026

Status conference and TRO hearing

Initial hearing held in San Francisco. Google deepens its own Pentagon AI relationship the same day.

March 24, 2026

Preliminary injunction hearing

Judge Rita F. Lin will hear Anthropic's full motion at 1:30 PM PT in San Francisco. If granted, the injunction blocks enforcement of the supply chain risk designation while the case proceeds.

QuitGPT: The Consumer Revolt No One Predicted

While Anthropic prepared its legal strategy, something unprecedented happened on the consumer side. A grassroots boycott called QuitGPT erupted after OpenAI signed a Pentagon contract just hours after the Trump administration blacklisted Anthropic for refusing the same arrangement. The optics were devastating: one company took a principled stand and got punished; its competitor rushed in to profit from the moment.

  • 1.5M — ChatGPT subscriptions cancelled
  • 295% — spike in ChatGPT uninstalls
  • 2.5M — total QuitGPT supporters

On March 3, thousands of protesters gathered outside OpenAI's San Francisco headquarters in one of the largest tech industry demonstrations in recent memory. Signs read “Trust is not a feature — it's a requirement” and “Your AI, their surveillance.” The movement quickly spread online, with ChatGPT uninstalls spiking 295% in a single day and a 775% surge in 1-star reviews.

“We shouldn't have rushed to get this out on Friday. The issues are super complex and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”

— Sam Altman, OpenAI CEO, in an internal memo shared publicly

The damage to OpenAI was swift. Claude surpassed ChatGPT in U.S. daily downloads for the first time, climbing to #1 on the Apple App Store in over 15 countries. Anthropic reported 11 million daily active users, with paid subscribers more than doubling since January. Altman acknowledged the misstep publicly, calling the timing of the Pentagon deal “opportunistic and sloppy.” OpenAI is now reportedly renegotiating its Pentagon terms toward positions closer to Anthropic's original stance.

A larger AI accountability march is planned for March 21 in San Francisco and several other cities, suggesting the consumer movement is far from over.

Industry Solidarity: Competitors Unite Against the Designation

What makes this moment unique in tech history is the scale of industry solidarity. Companies that compete fiercely for market share are publicly backing Anthropic's legal fight.

Who's Supporting Anthropic

  • Microsoft — filed an amicus brief supporting Anthropic's lawsuit, despite being OpenAI's largest investor
  • Jeff Dean (Google DeepMind chief scientist) — among 30+ OpenAI and Google employees who filed a statement in their personal capacities
  • Tech industry coalition including Nvidia, Google, and Anthropic — sent a letter to Defense Secretary Hegseth opposing the designation
  • Caitlin Kalinowski (OpenAI's head of robotics) — resigned publicly on March 7, saying the Pentagon deal “deserved more deliberation”
  • 900+ employees across OpenAI and Google signed an open letter demanding their employers reject Pentagon surveillance contracts

According to NPR, the coalition's concern extends beyond Anthropic: if the government can designate a domestic AI company a supply chain risk for exercising contractual rights, any tech company could face the same treatment. The precedent, industry leaders argue, would have a chilling effect on all AI safety advocacy.

Context: The Two Red Lines That Started Everything

The conflict traces back to a $200 million contract Anthropic signed with the Department of Defense in July 2025, making Claude the first major AI model deployed on the government's classified networks. That contract included two specific restrictions, which Anthropic has consistently maintained:

Red Line 1: No Mass Surveillance

Claude cannot be used for blanket monitoring of American citizens — rejecting the kind of dragnet domestic surveillance programs that have drawn bipartisan criticism since the Snowden revelations.

Red Line 2: No Autonomous Weapons

Claude cannot be deployed in weapons systems that select and engage targets without a human making the final decision — a position aligned with international calls for meaningful human control over lethal force.

The Pentagon wanted these restrictions removed, arguing it needed “all lawful purposes” access to Claude and that national security could not be subject to private-company veto power. When Anthropic refused to budge, the escalation was rapid: Trump's Truth Social post on February 27, the formal supply chain risk designation on March 6, and now the lawsuits.

This all came in the wake of Anthropic's RSP v3.0 policy release on February 24, which formalized the company's safety commitments, and just weeks after Anthropic ran a Super Bowl ad mocking ChatGPT's ads — already signaling Anthropic's willingness to take strong public stances.

What This Means: Trust Is the New Differentiator

The combined impact of Anthropic's lawsuits and the QuitGPT boycott represents a turning point for the AI industry. For the first time, consumers are choosing AI tools based primarily on trust and ethics, not features or price. The implications are significant:

Three Industry Shifts Happening Right Now

  1. Ethics as market advantage. Anthropic's principled stance drove it to #1 on the App Store. Companies can no longer treat safety commitments as PR — users are watching.
  2. Government AI contracts carry brand risk. OpenAI's Pentagon rush-deal cost it 1.5 million subscribers. Future government partnerships will be scrutinized by consumers.
  3. Privacy-first tools gain relevance. When users can't trust what happens with their data in the cloud, on-device processing becomes a genuine differentiator.

Legal experts have noted that the supply chain risk designation may not survive court challenge. As Axios reported, the designation was designed for foreign adversaries — not domestic companies in contractual disputes. The First Amendment argument is particularly novel: can the government punish a company for publicly advocating AI safety positions?

Meanwhile, Google, Microsoft, and Amazon have all confirmed they will continue offering Claude through their cloud platforms outside of defense contracts. The designation's practical scope may be narrower than its political impact suggests — but the reputational damage to OpenAI from the consumer backlash appears far more lasting.

What To Do Now: Choosing AI Tools in the Trust Era

This crisis has brought a question to the forefront that many users had been ignoring: how much do you trust the company behind your AI tools? Here are some practical steps:

Practical Steps for AI Users

  1. Audit your AI stack. Which of your AI tools send data to the cloud? Which company operates the server? What are their data retention policies?
  2. Read the safety policies. Companies like Anthropic publish their policies publicly (like RSP v3.0). Check whether your AI provider has published equivalent commitments.
  3. Consider on-device alternatives. For sensitive work — legal documents, client data, medical records, financial information — tools that process locally on your device eliminate the question of third-party access entirely.
  4. Follow the court case. The preliminary injunction hearing on March 24 could set legal precedent for how AI companies interact with government demands. The outcome will affect the entire industry.

For Mac users working with confidential material, Elephas offers AI writing and knowledge management that works directly on your device. When your documents never leave your Mac, the question of which company has access to your data — or which government can demand it — simply doesn't apply.

What to Watch Next

March 24: Preliminary Injunction Hearing

Judge Rita F. Lin will hear Anthropic's motion at 1:30 PM PT in San Francisco (Anthropic PBC v. U.S. Department of War, 3:26-cv-01996). The Pentagon has until March 17 to file its opposition; Anthropic's reply is due March 20.

March 21: AI Accountability March

The QuitGPT movement is organizing a major demonstration in San Francisco and other cities, expected to draw significantly larger crowds than the March 3 protest.

This story is developing rapidly. We'll continue to cover the court proceedings, OpenAI's response, and the broader implications for AI users. The questions being raised — about trust, surveillance, autonomy, and who controls AI — are ones that every person who uses AI tools will eventually have to answer.


Want AI That Never Leaves Your Mac?

When trust in cloud AI is uncertain, on-device AI is the answer. Elephas processes everything locally on your Mac — your documents, your data, your privacy.

Try Elephas Free
Written by

Ayush Chaturvedi

AI & Mac Productivity Expert

Ayush Chaturvedi is the co-founder of Elephas and an expert in AI, Mac apps, and productivity tools. He writes about practical ways professionals can use AI to work smarter while keeping their data private.

Related Resources

Explore all AI Privacy & Security resources
news

ChatGPT Launches Ads as Privacy Researcher Resigns from OpenAI

A growing wave of AI safety researchers are leaving major companies as ChatGPT goes ad-supported.

6 min read
news

Anthropic's Super Bowl Ad Mocks ChatGPT's Ads — Why Ad-Free AI Matters

Anthropic spent $10M+ on Super Bowl LX ads mocking OpenAI's decision to put ads in ChatGPT. Here's what it means for the future of AI.

8 min read
news

Anthropic Sues US Government Over Supply Chain Risk Label

Anthropic filed two lawsuits against the US government after the Pentagon labeled it a supply chain risk for refusing to remove AI safety restrictions from military contracts. Full timeline, legal claims, and industry impact.

18 min read
news

Anthropic Refuses Pentagon Demands, Gets Blacklisted — Then Claude Becomes the #1 App

Anthropic refused to grant the Pentagon unrestricted AI access, was designated a supply chain risk, and Claude surged to #1 on the App Store as ChatGPT uninstalls spiked 295%.

12 min read