Breaking News · 18 min read

Anthropic Sues US Government Over Supply Chain Risk Label

The US government has never labeled an American company a “supply chain risk.” That changed in March 2026, when the Pentagon used this tag against Anthropic, the company behind the AI assistant Claude. What followed was a chain of events that pulled in the President, the Defense Secretary, and some of the biggest names in tech. Lawsuits were filed. Billions were at stake. This is the full story of how a contract dispute turned into one of the most important legal battles the AI industry has ever seen.

By the Numbers

  • 2 lawsuits filed against the US government
  • 16 federal agencies named in the complaint
  • 100+ enterprise customers expressing concern
  • 149K daily Claude installs, surpassing ChatGPT

Executive Summary

  • Anthropic filed two lawsuits against the US government on March 9, 2026, after the Pentagon labeled it a “supply chain risk”
  • The dispute began when Anthropic refused to remove two safety restrictions: no lethal autonomous weapons without human oversight, and no mass surveillance of American citizens
  • The lawsuit names 16 federal agencies and raises five legal claims, including First Amendment retaliation and violation of due process
  • Microsoft, Google, Amazon, Apple, and former military officials filed legal briefs backing Anthropic
  • The company could lose billions in 2026 revenue, with over 100 enterprise customers expressing concern
  • Claude downloads surged past ChatGPT for the first time, reaching 149,000 daily US installs
  • The Pentagon is still using Claude in active military operations despite the ban

What Is the Anthropic Supply Chain Risk Dispute and Why Does It Matter

Anthropic is one of the biggest AI companies in the United States. It builds Claude, an AI assistant that millions of people use every day. Hundreds of businesses, government agencies, and defense contractors rely on Claude for everything from writing and research to intelligence analysis. Before this dispute, Anthropic was one of the Pentagon's closest AI partners.

In early March 2026, the Pentagon placed a supply chain risk label on Anthropic. This label is a serious government tool. It is normally used against foreign adversaries like Huawei or ZTE to protect US military systems from foreign sabotage. Anthropic is the first American company to ever receive this designation.

The reason behind the label comes down to two safety rules that Anthropic refused to remove from its military contracts:

  • Claude should not be used for lethal autonomous weapons without human control.
  • Claude should not be used for mass surveillance of American citizens.

Anthropic says Claude was never built or tested for those tasks. The company argues that Claude simply cannot do those things safely or reliably right now, and that allowing it to be used that way would create serious risks for everyone involved.

Once this label was applied, any company doing business with the US military had to stop using Claude for that work. Anthropic says this acts like a ban. It did not just lose its Pentagon contracts. It started losing commercial customers too, because many large companies fear doing business with a company that has been labeled a national security risk.

Timeline of the Anthropic-Pentagon Conflict: From Partner to Blacklist

Anthropic and the US government were close partners for nearly two years before things fell apart. The company built a special version of Claude called “Claude for Government” that ran on classified networks. It passed some of the most rigorous security audits in the federal system. Pentagon officials praised Claude's capabilities publicly.

The trouble started in the fall of 2025. The Pentagon began pushing Anthropic to drop its safety rules and allow Claude for “all lawful uses.” Anthropic pushed back, offering to expand Claude's approved uses significantly while keeping the two red lines in place.

In January 2026, Defense Secretary Pete Hegseth issued a memo directing his department to add “any lawful use” language into all AI contracts. Anthropic was the only major AI company that refused.

From there, the conflict moved fast. Here is how events unfolded week by week:

  • February 16, 2026: A Pentagon source told reporters the department was close to cutting ties with Anthropic and planned to make the company “pay a price.”
  • February 24, 2026: Hegseth met with Anthropic CEO Dario Amodei and gave him a four-day deadline to comply or face a supply chain risk label.
  • February 26, 2026: Amodei published a public statement refusing to drop the two safety rules.
  • February 27, 2026: Before the deadline expired, President Trump posted on Truth Social ordering all federal agencies to stop using Anthropic. Hours later, the Pentagon formally designated Anthropic a supply chain risk.
  • February 28, 2026: Reports emerged that the Pentagon used Claude in a major air strike on Iran just hours after announcing the ban.
  • March 4, 2026: Anthropic received a formal letter confirming the supply chain risk designation.
  • March 5, 2026: Amodei responded publicly, calling the designation narrow and saying Anthropic would challenge it in court.
  • March 9, 2026: Anthropic filed two lawsuits, one in California federal court and one in the DC appeals court.
  • March 10–11, 2026: Microsoft, Google, Amazon, Apple, OpenAI employees, and former military officials all filed legal briefs supporting Anthropic.

What started as a contract disagreement turned into one of the biggest clashes between a tech company and the US government in recent history.

What Anthropic's Lawsuit Says: The Five Legal Claims Explained

Anthropic filed its lawsuit on March 9, 2026, in the US District Court for the Northern District of California. The complaint runs 48 pages and names 16 federal agencies as defendants, from the Department of Defense to the GSA.

The lawsuit lays out five separate legal claims against the government.

Claim 1: Violation of the Administrative Procedure Act and Supply Chain Risk Law (10 USC 3252)

Anthropic argues that the supply chain risk law exists for one purpose: to stop foreign enemies from sabotaging US defense systems. It was never designed to punish an American company for setting terms on a voluntary contract. The law also requires the Pentagon to use the “least restrictive means” available to protect the supply chain. Anthropic says the Pentagon skipped several less extreme options, like renegotiating contract terms, before jumping to the most severe action it could take.

Claim 2: First Amendment Violation

Anthropic says the government retaliated against it for protected speech. The company argues that its usage policy, its public statements about Claude's limitations, and Amodei's refusal to back down are all forms of protected expression. Trump called Anthropic a “radical left, woke company” run by “leftwing nut jobs.” Hegseth accused Anthropic of “Silicon Valley ideology” and called its safety restrictions “sanctimonious.”

Claim 3: Ultra Vires (Acting Beyond Legal Authority)

Ultra vires is a legal term that means acting beyond the power you were given. Anthropic argues that no law gives the President the authority to order every federal agency to stop using a specific company's products. Trump's Truth Social post directing agencies to “immediately cease” using Anthropic had no legal basis, according to the lawsuit.

Claim 4: Fifth Amendment Due Process Violation

Anthropic says the government stripped it of contracts, business relationships, and its reputation without any notice, hearing, or chance to respond. The supply chain risk label was applied before Anthropic's deadline to respond had even expired.

Claim 5: APA Prohibition on Unauthorized Sanctions

Anthropic says the agencies that cut ties with it acted without any independent legal authority. The Treasury, the State Department, the GSA, and other agencies followed Trump's social media post without conducting their own legal reviews.

One important detail stands out in this case. Anthropic is not asking for money. The company wants the court to do three things:

  • Declare the supply chain risk label unlawful.
  • Block the government from enforcing the designation.
  • Order all agencies to reverse any actions they took against Anthropic.

The Government's Position: What the Pentagon and White House Say

The government frames this dispute in simple terms. The military should be able to use AI tools for all lawful purposes, and no private company should dictate how the armed forces protect the country.

A Pentagon official stated that the military “will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical technology.” In this framing, Anthropic's safety rules are not about protecting the public. They are about a private company overstepping its role.

Pentagon Chief Technology Officer Emil Michael said on CNBC on March 12 that there is “no chance” the department renegotiates with Anthropic after the lawsuits were filed. He called the company's decision to sue “bridge-burning behavior” and said the Pentagon is moving forward with replacement AI systems.

Government officials have also made one thing clear. They say the Pentagon was not asking Anthropic to build autonomous weapons or conduct mass surveillance. The dispute, they say, is about who gets to decide what is “appropriate” use of AI in military settings — the company that built it or the government that bought it.

Officials raised another point that does not appear in Anthropic's lawsuit. They said tensions between the two sides grew after an Anthropic executive reportedly met with congressional Democrats to discuss the dispute. The White House saw this as “going to the opposition,” which escalated the political dimension of the conflict.

Government Refuses to Commit

During a court hearing on March 11, a Department of Justice lawyer representing the government refused to make two commitments: he would not promise that the government would take no further action against Anthropic, and he would not commit to stopping outreach to Anthropic's customers urging them to end their relationship with the company. When asked directly, the lawyer said he was “not prepared to offer any commitments.”

Big Tech Rallies Behind Anthropic: Who Is Supporting the Lawsuit

Anthropic's legal fight has drawn wide support from some of the biggest names in tech. This is striking because many of these companies have been working hard to secure their own government AI contracts — contracts that become more valuable with Anthropic out of the picture.

Microsoft was the first major company to file a standalone amicus brief. An amicus brief is a legal document filed by a third party that has a strong interest in the outcome of a case. Microsoft's filing was notable because it came from a company that directly benefits from Anthropic's removal from the government AI market.

In its filing, the company argued that enforcing the supply chain risk label right away would force military contractors to rush to rebuild their products using other AI models, creating new risks for operations that depend on Claude right now.

Several other tech giants joined a separate joint amicus brief filed through the Chamber of Progress, a tech advocacy group. The companies that signed on include:

  • Google
  • Amazon
  • Apple
  • OpenAI

The brief called the supply chain risk label “a potentially ruinous sanction.” It compared the government's actions to a “temper tantrum” and warned that any AI company could face the same treatment in the future for disagreeing with a government request.

Nearly 40 employees from OpenAI and Google also filed their own brief in their personal capacities. Among them was Google chief scientist Jeff Dean. Their brief argued that AI safety restrictions like Anthropic's are not about ideology. They are standard engineering practice for systems that are still being tested and improved.

Support came from outside the tech world too. Over two dozen former high-ranking US military officials filed a separate brief. They warned that the government's approach would discourage the best AI companies from working with the military at all, weakening national security in the long run.

On the political side, Senator Kirsten Gillibrand called the designation “shortsighted, self-destructive, and a gift to our adversaries.” Senator Mark Warner released a statement saying the administration was “weaponizing procurement rules to punish a company for exercising its rights.”

Notable absence: Meta stayed out of the case entirely. The company left the Chamber of Progress in 2025 and has not commented on the Anthropic dispute.

Dean Ball, a former Trump AI policy adviser who helped shape the administration's AI Action Plan, praised Microsoft's decision to back Anthropic. He called the supply chain risk designation “extremely bad policy” and said it would “chill innovation across the entire AI industry.”

The Business Impact: What the Ban Means for Anthropic and the AI Industry

The financial damage from the supply chain risk label is already adding up. Anthropic's CFO Krishna Rao said in a court filing that the government's actions “threaten billions of dollars in 2026 revenue” and that the damage extends far beyond the Pentagon contracts themselves.

The impact showed up almost immediately after the public clash. More than 100 enterprise customers contacted Anthropic with concerns about continuing to use Claude. Dozens of those companies specifically asked about their termination rights. At least one federal contractor that had built custom applications with Claude began quietly migrating to a competitor.

Agencies That Cut Ties

  • The GSA terminated Anthropic's “OneGov” contract, cutting off its access to all three branches of the federal government.
  • The Treasury Department publicly ended all use of Claude.
  • The State Department switched its internal chatbot from Anthropic to OpenAI.
  • The Federal Housing Finance Agency dropped Anthropic products.
  • Defense contractor Lockheed Martin told its employees to stop using Claude.

Companies Standing By Anthropic

  • Microsoft, Google, and Amazon confirmed they will continue offering Claude on their cloud platforms for work that is not related to the Pentagon.

The transition away from Claude on the military side is also far from complete. Palantir CEO Alex Karp confirmed on March 12 that Claude is still being used in active military operations. The military cannot simply switch AI models overnight without risking operational failures.

The Pentagon is now looking at three AI tools to replace Claude across its systems:

  • Google's Gemini
  • OpenAI's ChatGPT
  • Elon Musk's Grok

Pentagon CTO Emil Michael said the plan calls for a six-month transition, with exceptions if sensitive operations are at risk and no other AI tool can fill the gap.

The Public Response: Claude Downloads Surge as Users Take Sides

The Pentagon dispute had an unexpected side effect. It made Claude far more popular than it had ever been.

Over the weekend of February 28 to March 1, Claude's mobile app climbed to number one on Apple's US App Store free-app chart. It overtook ChatGPT for the first time. Sensor Tower data showed the app going from sixth place on Wednesday to first place by Saturday, with daily downloads hitting 149,000 compared to ChatGPT's 124,000.

Claude's Growth by the Numbers

  • Daily active users reached 11.3 million, a 183% increase since the start of 2026
  • Free user signups increased over 60% since January
  • Paid subscribers more than doubled
  • 149,000 daily US installs vs ChatGPT's 124,000

Meanwhile, the backlash against OpenAI intensified. ChatGPT's web traffic dropped 6.5% month over month during the same period. US app uninstalls for ChatGPT jumped 295% day over day on the Friday of the Pentagon deal announcement.

The shift showed up in business spending too. According to Ramp's procurement data, 56% of organizations using a generative AI vendor now use Anthropic, up from 43% the previous quarter. That puts Claude in second place behind OpenAI for enterprise adoption, but the gap is narrowing fast.

OpenAI CEO Sam Altman acknowledged the situation publicly. He admitted the timing of his company's Pentagon deal looked “opportunistic and sloppy.”

The numbers tell a clear story. Millions of users and businesses responded to the dispute by moving toward the company that chose to push back.

What This Means for You

The Anthropic lawsuit raises fundamental questions about how AI companies, governments, and users relate to each other. Here is what matters if you use AI tools for work.

No single AI provider is immune to disruption. Whether it is a government supply chain risk label, a policy change, or a company pivot, relying on one AI tool creates fragility in your workflow. The companies and individuals who fared best through this crisis were those already using multiple AI providers.

AI safety is no longer an abstract debate. Anthropic's refusal to drop two safety rules led to billions in potential losses, lawsuits, and a political firestorm. The growing tension between AI capabilities and safety is now a business risk, a legal question, and a consumer decision — not just a research problem.

Users voted with their wallets. The surge in Claude downloads and the spike in ChatGPT uninstalls show that the market cares about how AI companies behave beyond the product itself. Trust is now a competitive advantage.

For Mac users: Elephas gives you access to Claude, GPT, Gemini, and local offline models — all from your Mac's menu bar. If any single provider faces disruption, your workflow stays intact. It's the kind of multi-provider flexibility that moments like this make essential.

The Anthropic lawsuit is now the most important legal case in the AI industry. It will determine whether the government can use procurement rules as political weapons, whether AI companies have the right to set safety limits on their own products, and whether public trust in a company's principles can translate into lasting market advantage. The court cases are just beginning. The industry is watching.

Written by

Ayush Chaturvedi

AI & Mac Productivity Expert

Ayush Chaturvedi is the co-founder of Elephas and an expert in AI, Mac apps, and productivity tools. He writes about practical ways professionals can use AI to work smarter while keeping their data private.

Related Resources

Anthropic Sues Pentagon, 1.5 Million Quit ChatGPT: The AI Trust Crisis Reshaping the Industry
Anthropic filed two lawsuits against the Pentagon over its supply chain risk designation. Meanwhile, 1.5 million users quit ChatGPT in the QuitGPT boycott. Here's what it means for AI privacy. (14 min read)

Anthropic Refuses Pentagon Demands, Gets Blacklisted — Then Claude Becomes the #1 App
Anthropic refused to grant the Pentagon unrestricted AI access, was designated a supply chain risk, and Claude surged to #1 on the App Store as ChatGPT uninstalls spiked 295%. (12 min read)

Anthropic RSP v3.0: The Biggest Change to AI Safety Policy in Two Years
Anthropic released RSP v3.0 on February 24, 2026. Three core changes: separating company from industry commitments, a Frontier Safety Roadmap with public goals, and Risk Reports with independent external reviewers. (9 min read)

Anthropic's Claude Used in U.S. Military's Venezuela Raid
$200M Pentagon contract at risk as AI safety rules clash with military needs. (8 min read)