Anthropic Refuses Pentagon's AI Demands, Gets Blacklisted — Then Claude Becomes the World's Most Downloaded AI App
On February 26, 2026, Anthropic CEO Dario Amodei told the Pentagon his company “cannot in good conscience” grant the military unrestricted access to Claude. The next day, President Trump ordered every federal agency to stop using Anthropic's technology. Within days, Claude was the number one app in the world. This is the story of how an AI company's refusal to bend became the biggest consumer tech story of the year.
- #1 — App Store rank in 20+ countries
- 295% — ChatGPT uninstall spike
- 900+ — Google & OpenAI employees signed open letter
- 3x — Claude daily active users since Jan 2026
Executive Summary
- Anthropic refused to let the Pentagon use Claude for mass domestic surveillance or fully autonomous weapons without human oversight
- The DOD designated Anthropic a “supply chain risk” — a label usually reserved for foreign adversaries like Huawei
- Claude surged from outside the top 100 to #1 on the App Store, beating ChatGPT and Gemini
- ChatGPT saw a 295% spike in uninstalls and a 775% surge in 1-star reviews
- OpenAI CEO Sam Altman admitted his Pentagon deal “looked opportunistic and sloppy”
- Over 900 employees at Google and OpenAI signed an open letter backing Anthropic's stance
- Google, Microsoft, and Amazon confirmed they will continue offering Claude outside defense contracts
The Ultimatum: What the Pentagon Demanded
The conflict started with a contract. In July 2025, Anthropic signed a $200 million deal with the Department of Defense, making Claude the first major AI model deployed on the government's classified networks. The system was already involved in military operations, including the Venezuela raid earlier that year. The contract included specific restrictions: Claude would not be used for mass surveillance of American citizens or for fully autonomous lethal weapons operating without human oversight.
But by late February 2026, the Pentagon wanted those restrictions removed. Defense Secretary Pete Hegseth gave Anthropic a Friday deadline: agree to let the military use Claude for any lawful purpose, without guardrails, or face consequences. The Department of Defense argued that national security required unrestricted access to the most capable AI systems available.
Amodei responded in a public statement on February 26: “In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values. These threats do not change our position: we cannot in good conscience accede to their request.”
Anthropic drew two specific red lines. First, no mass domestic surveillance — the company would not allow Claude to be used for blanket monitoring of American citizens. Second, no fully autonomous weapons — Claude would not be deployed in weapons systems that can select and engage targets without a human making the final decision. These red lines aligned with the company's RSP v3.0 safety policy released just days earlier on February 24. Everything else, Anthropic said, was on the table.
The Fallout: Blacklisted as a Supply Chain Risk
Anthropic did not budge. On February 27, President Trump posted on Truth Social directing “EVERY Federal Agency” to “IMMEDIATELY CEASE” using Anthropic's technology. Defense Secretary Hegseth followed by formally designating Anthropic a supply chain risk — a classification previously reserved for foreign adversaries like the Chinese tech company Huawei.
The designation meant the military would phase out Claude over six months. According to CBS News, three cabinet-level agencies — the State Department, Treasury, and HHS — began switching from Claude to ChatGPT Enterprise and Google Gemini.
The Iran Paradox
In a striking contradiction, the Washington Post reported that Claude was being actively used in U.S. military operations in Iran at the same time the government was declaring it a national security risk. The government was simultaneously arguing that Claude was so vital to military operations it couldn't tolerate contractual restrictions — while also claiming the same technology posed a supply chain threat.
The political rhetoric escalated quickly. Hegseth called Anthropic “sanctimonious.” Trump called the company “radical left” and “woke.” White House AI czar David Sacks, who had previously accused Anthropic of supporting “woke AI,” piled on. The attacks came just weeks after Anthropic ran a Super Bowl ad mocking ChatGPT's ads, a move that had already drawn attention to the company's willingness to take public stances. But in a CBS News interview aired on March 1, Amodei pushed back: “We are patriots. Everything we have done was for this country.”
The Public Response: Claude Hits #1, ChatGPT Uninstalls Surge
What happened next caught almost everyone off guard. Rather than hurting Anthropic, the blacklisting triggered a massive wave of public support.
According to TechCrunch, Claude climbed from 124th in the Apple App Store on January 28 to #1 by March 1. Sensor Tower data showed the app went from sixth on Wednesday to fourth on Thursday to first on Saturday, beating both ChatGPT and Google Gemini.
Claude's Growth by the Numbers
- Daily downloads: 149,000 vs ChatGPT's 124,000 (as of March 2)
- Free user signups increased over 60% since January
- Paid subscribers more than doubled since January
- Daily active users tripled since the start of 2026
- Web traffic up 297.7% year-over-year
- #1 AI app in 20+ countries on Apple's App Store
ChatGPT's Backlash by the Numbers
- App uninstalls surged 295% day-over-day on February 28 (vs. typical 9% rate)
- 1-star reviews surged 775% on March 1
- Over 5,000 1-star reviews on March 2 alone (480% above normal)
- 5-star reviews dropped 42%
- Web traffic down 6.5% month-over-month in February
The support extended beyond downloads. According to the Washington Post, encouraging messages like “Thank you” covered the sidewalks outside Anthropic's San Francisco offices. Demand was so high that Claude briefly went down on Monday morning, which Anthropic attributed to “unprecedented demand.”
Important context: ChatGPT still dominates the market with 250.5 million daily active users across iOS and Android as of March 2. Claude's audience remains roughly 60x smaller. The downloads surge is real and significant, but it's too early to call it a permanent market shift.
OpenAI's Misstep: “Opportunistic and Sloppy”
Within hours of Anthropic being blacklisted on Friday, February 27, OpenAI announced its own deal with the Department of Defense. The timing could not have looked worse.
The backlash was swift. OpenAI employees publicly criticized the move — adding to the growing exodus of alignment researchers from the company. Users uninstalled ChatGPT in protest. And by Monday, CEO Sam Altman was in damage control mode.
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” Altman admitted. “We shouldn't have rushed to get this out on Friday.”
OpenAI amended the contract to include new language about its principles on surveillance. Altman also publicly said the Anthropic supply chain risk designation was “a very bad decision” and called it “an extremely scary precedent,” hoping it would be reversed.
The episode also prompted Amodei to “directly apologize” for an internal memo he had sent Anthropic staff that attacked OpenAI's behavior and suggested Anthropic was being punished for not giving “dictator-like praise” to the administration.
The OpenAI-Pentagon deal also came amid growing concern about ChatGPT's move to advertising and AI privacy risks more broadly, compounding the public's unease with OpenAI's direction.
The Industry Rallies: “We Will Not Be Divided”
Support for Anthropic came from unexpected places — including the employees of its direct competitors.
An open letter titled “We Will Not Be Divided,” hosted at notdivided.org, opposed the use of advanced AI for domestic mass surveillance or fully autonomous weapons without human oversight. It gathered signatures rapidly: 537 from Google, 89 from OpenAI, and over 900 in total by week's end. The hashtags #NoWarAI and #AIEthics trended on X.
Big Tech Stands Behind Anthropic Commercially
- Google confirmed Anthropic remains available outside defense projects and expanded its cloud partnership with up to 1 million TPUs
- Microsoft said Anthropic technology will stay available through M365, GitHub, and Azure AI Foundry
- Amazon released a similar statement supporting continued availability through AWS
Anthropic CEO Amodei clarified that the supply chain risk designation “doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts.” The company's commercial operations, in other words, remain intact.
What Happens Next: Legal Challenge and Negotiations
Anthropic has stated it will challenge the supply chain risk designation in court. Legal analysts at Lawfare have argued the designation may not hold up, noting it was designed by Congress for foreign adversaries — not for a domestic company in a contractual disagreement.
According to the Financial Times, Amodei is back at the negotiating table with Emil Michael, under-secretary of defense for research and engineering. However, Michael pushed back publicly on X: “I want to end all speculation: there is no active @DeptofWar negotiation with @AnthropicAI.”
Key Questions Going Forward
- Will Anthropic's legal challenge succeed in overturning the supply chain risk designation?
- Can the military replace Claude on classified networks within the six-month timeline?
- Will the consumer growth surge translate into lasting market share for Claude?
- Does this set a precedent for how AI companies negotiate with governments?
- Will other AI companies adopt similar red lines on military use?
What This Means for You
If you use AI productivity tools for work, here is what to keep in mind right now.
Claude is not going anywhere for consumers. The supply chain risk designation only affects Department of Defense contracts. Google, Microsoft, and Amazon have all confirmed Claude remains available through their platforms. Anthropic's commercial operations are unaffected. If anything, the surge in signups has strengthened the company's consumer position.
This reinforces why AI provider diversity matters. The situation is a reminder that relying on a single AI provider creates risk, whether from policy changes, government action, or business decisions. Tools that let you work across multiple AI providers — using Claude for some tasks, GPT for others, and local models for sensitive work — give you the flexibility to adapt regardless of what happens in Washington.
The AI ethics conversation now has commercial consequences. For the first time, consumers voted with their wallets on AI ethics at scale. The 295% ChatGPT uninstall spike and Claude's surge show that users care about how their AI tools are used beyond their own screens. This follows a pattern we've tracked, from AI safety researchers leaving major companies to the $285B stock selloff triggered by Claude Cowork. Companies building AI products will need to take public trust seriously — not just capability.
For Mac users: Elephas gives you access to Claude, GPT, Gemini, and local offline models — all from your Mac's menu bar. If any single provider faces disruption, your workflow stays intact. It's the kind of multi-provider flexibility that moments like this make essential.
The Anthropic-Pentagon standoff is far from over. The legal battle is just beginning, the negotiation status is unclear, and the long-term market effects remain to be seen. But one thing is already clear: for the first time, an AI company bet its most important government contract on principles — and the public rewarded it. Whether that reward lasts, and whether it changes how other AI companies behave, is the story to watch in the months ahead.
Related Resources
- Anthropic Sues US Government Over Supply Chain Risk Label (18 min read)
- Anthropic Sues Pentagon, 1.5 Million Quit ChatGPT: The AI Trust Crisis Reshaping the Industry (14 min read)
- Anthropic RSP v3.0: The Biggest Change to AI Safety Policy in Two Years (9 min read)
- Anthropic's Claude Used in U.S. Military's Venezuela Raid (8 min read)
Sources
- Breaking Defense: Anthropic says it “cannot in good conscience” agree to Pentagon demands (February 26, 2026)
- NPR: Pentagon labels Anthropic a supply chain risk “effective immediately” (March 6, 2026)
- TechCrunch: Claude rises to #1 in the App Store following Pentagon dispute (March 1, 2026)
- TechCrunch: ChatGPT uninstalls surged by 295% after DoD deal (March 2, 2026)
- CNBC: OpenAI's Altman admits defense deal “looked opportunistic and sloppy” (March 3, 2026)
- Washington Post: Anthropic's fight with the Pentagon made Claude hugely popular (March 6, 2026)
- Washington Post: Pentagon leverages AI in Iran strikes amid feud with Anthropic (March 4, 2026)
- CNBC: Google says Anthropic remains available outside of defense projects (March 6, 2026)
- Lawfare: Pentagon's Anthropic designation won't survive first contact with legal system
- TechCrunch: Claude's consumer growth surge continues after Pentagon deal debacle (March 6, 2026)
- CBS News: Dario Amodei's first interview after Pentagon blacklist: “We Are Patriots” (March 1, 2026)
