Breaking News · March 28, 2026 · 7 min read

Anthropic Accidentally Leaks Claude Mythos: A New AI Model Tier That Represents a "Step Change" in Capabilities

On March 26, 2026, a configuration error exposed Anthropic's most powerful unreleased model, Claude Mythos. The company confirmed the model sits above the Opus tier and posts dramatically higher benchmark scores, while warning of unprecedented cybersecurity risks.

At a Glance

  • ~3,000 assets exposed in the leak
  • 4th tier, above Opus (tier name: Capybara)
  • "Step change" in AI performance
  • Rollout status: early access

What Happened

Security researchers discovered that a configuration error in Anthropic's content management system had made close to 3,000 unpublished assets publicly accessible—including draft blog posts about an unreleased model called Claude Mythos. After Fortune reported the leak on March 26, Anthropic removed public access and confirmed the model exists, calling it "a step change" in AI performance and "the most capable we've built to date."

The Details: What Claude Mythos Actually Is

According to leaked documents and Anthropic's subsequent confirmation, Claude Mythos represents a new model tier called Capybara that sits above the company's existing flagship Opus line. Anthropic currently offers three model tiers—Haiku, Sonnet, and Opus—and Capybara would add a fourth, more powerful and more expensive option.

The leaked drafts describe Claude Mythos as "larger and more intelligent" than any model the company has previously built, with dramatically higher scores on benchmarks for software coding, academic reasoning, and cybersecurity compared to Claude Opus 4.6.

Key Details from the Leak

  • Model name: Claude Mythos (product tier: Capybara)
  • Position: Above Opus—a new fourth tier
  • Performance: Dramatically higher benchmark scores across coding, reasoning, and cybersecurity
  • Status: Being trialed by early-access customers
  • Cost: Anthropic says it is "very expensive to serve" and is working on efficiency improvements before general release
  • Rollout: Deliberately slow, security-focused approach

The leak was discovered by cybersecurity researchers Alexandre Pauwels from the University of Cambridge and Roy Paz, a Senior AI Security Researcher at LayerX Security. They found the unsecured data store and reported it to Fortune, which published the story on March 26.

Context: Why This Matters Right Now

The Mythos leak comes at a particularly turbulent time for Anthropic. The company is simultaneously fighting a legal battle with the Pentagon over its refusal to remove AI safety guardrails for military use, and a federal judge just blocked the government's attempt to blacklist the company as a "supply chain risk."

The timing creates an unusual narrative: Anthropic is arguing in court that AI safety restrictions are essential, while its own leaked documents warn that its next model poses "unprecedented cybersecurity risks." This tension underscores the dual-use challenge facing every frontier AI lab—the same capabilities that make models useful also make them potentially dangerous.

The frontier model race has accelerated dramatically in early 2026. OpenAI released GPT-5.4 earlier this month, Google launched Gemini 3.1 Ultra, and xAI shipped Grok 4.20. A new Anthropic tier above Opus signals that the competition for the most capable AI model is far from settled.

Implications: What Claude Mythos Means for AI Users

A New Ceiling for AI Productivity

If Mythos performs as the leaked documents suggest, it could meaningfully raise the bar for what AI can do in professional workflows. Higher coding scores mean better code generation and debugging. Stronger reasoning means more reliable analysis and decision support. For knowledge workers who already rely on Claude for writing, research, and complex tasks, a step change in model capability translates directly to more capable AI assistance.

The Cybersecurity Double Edge

The most attention-grabbing detail from the leak is Anthropic's own assessment that Mythos is "currently far ahead of any other AI model in cyber capabilities." The leaked drafts warn it could exploit software vulnerabilities in ways that outpace defenders—a candid acknowledgment from the company building the model that its power cuts both ways.

Security concern: Anthropic's internal documents describe Mythos as presaging a wave of models that can exploit vulnerabilities faster than they can be patched. This is why the company is pursuing a deliberately slow, security-focused rollout with early-access customers first.

Market and Industry Reactions

The leak sent ripples through the AI industry. According to CoinDesk, software stocks moved on the news as investors weighed the implications of a significantly more capable AI model. Cybersecurity stocks were particularly affected, with Evercore analysts noting the dual nature of the model's capabilities—it could strengthen defenses but also amplify threats.

The Pricing Question

Anthropic acknowledged that Mythos is "very expensive for us to serve, and will be very expensive for our customers to use." This suggests Capybara-tier pricing could be significantly higher than current Opus pricing. The company is working to make the model more efficient before any general release—a challenge that mirrors OpenAI's recent struggles with Sora's unsustainable inference costs.

What To Do Now

While Claude Mythos is not yet publicly available, there are practical steps you can take to prepare for the next generation of AI capabilities:

  • Build your AI workflow now. If you're not already using Claude for writing, research, and knowledge work, start now. Users who have established workflows will benefit most when a more capable model becomes available.
  • Review your cybersecurity posture. The leaked documents suggest AI-powered cyber threats are accelerating. Audit your security practices, update software, and consider AI-powered security tools.
  • Watch for the rollout announcement. Anthropic is starting with early-access customers and expanding gradually. If you depend on Claude for critical work, consider reaching out to Anthropic about early access.
  • Explore multi-model workflows. Tools like Elephas let you use multiple AI models from your Mac—so you can switch between providers as new capabilities emerge without changing your workflow.

Frequently Asked Questions

What is Claude Mythos?

Claude Mythos is Anthropic's unreleased AI model that sits above the current Opus tier in a new tier called Capybara. Leaked documents describe it as a "step change" in capabilities, with dramatically higher scores in coding, academic reasoning, and cybersecurity compared to Claude Opus 4.6.

How was the model leaked?

A configuration error in Anthropic's content management system made close to 3,000 unpublished assets publicly accessible. Security researchers Alexandre Pauwels (University of Cambridge) and Roy Paz (LayerX Security) discovered the exposed data and reported it to Fortune.

When will Claude Mythos be available to the public?

No public release date has been announced. Anthropic says the model is currently being trialed by early-access customers with a deliberately slow, security-focused rollout. The company also noted it is "very expensive to serve" and is working on efficiency improvements before broader availability.

What are the cybersecurity concerns?

Anthropic's own leaked documents describe Mythos as "currently far ahead of any other AI model in cyber capabilities," warning it could find and exploit software vulnerabilities faster than defenders can patch them. This dual-use concern is driving the cautious rollout strategy.

How does this affect existing Claude users?

Current Claude models (Opus 4.6, Sonnet, Haiku) remain available and unaffected. When Mythos eventually launches, it will likely be offered as a premium option above Opus for users who need the highest level of AI capability. Tools built on Claude, like Elephas, would benefit from stronger underlying model performance.

The Bottom Line

Anthropic's accidental leak of Claude Mythos reveals that the AI capabilities race is far from plateauing. A new model tier above Opus—with meaningfully better performance across coding, reasoning, and cybersecurity—could reshape what AI productivity tools can do for knowledge workers.

The irony is hard to miss: the company building the model flagged its own creation as a potential cybersecurity risk, then accidentally proved the point by leaking it through a security misconfiguration.

What to watch next: Anthropic's rollout timeline for Mythos, pricing for the Capybara tier, and whether competitors respond with their own model upgrades. The frontier AI race has just found another gear.

Written by

Ayush Chaturvedi

AI & Mac Productivity Expert

Ayush Chaturvedi is the co-founder of Elephas and an expert in AI, Mac apps, and productivity tools. He writes about practical ways professionals can use AI to work smarter while keeping their data private.

Related Resources

Anthropic Sues US Government Over Supply Chain Risk Label

Anthropic filed two lawsuits against the US government after the Pentagon labeled it a supply chain risk for refusing to remove AI safety restrictions from military contracts. Full timeline, legal claims, and industry impact.

Anthropic Sues Pentagon, 1.5 Million Quit ChatGPT: The AI Trust Crisis Reshaping the Industry

Anthropic filed two lawsuits against the Pentagon over its supply chain risk designation. Meanwhile, 1.5 million users quit ChatGPT in the QuitGPT boycott. Here's what it means for AI privacy.

Anthropic Refuses Pentagon Demands, Gets Blacklisted — Then Claude Becomes the #1 App

Anthropic refused to grant the Pentagon unrestricted AI access, was designated a supply chain risk, and Claude surged to #1 on the App Store as ChatGPT uninstalls spiked 295%.

Anthropic RSP v3.0: The Biggest Change to AI Safety Policy in Two Years

Anthropic released RSP v3.0 on February 24, 2026. Three core changes: separating company from industry commitments, a Frontier Safety Roadmap with public goals, and Risk Reports with independent external reviewers.
