AI Safety · 10 min read

Anthropic's AI Safety Head Resigns, Says “The World Is in Peril”

Mrinank Sharma, the person who led Anthropic's Safeguards Research Team, just walked away from one of the biggest AI companies in the world. His resignation letter, posted on X on February 9, 2026, reads more like a goodbye letter to a friend than a standard corporate exit announcement. It's filled with honest reflections on AI safety, personal values, and even poetry. And it has sparked a much bigger conversation about what's really going on inside the companies building the most powerful AI systems on the planet.

  • 2 yrs: Time at Anthropic before resigning
  • $285B: Stock selloff triggered by Claude Cowork
  • 50%: Entry-level jobs at risk, per Amodei
  • 4+: Safety researchers departed this week

Executive Summary

  • Sharma spent two years at Anthropic working on AI sycophancy, bioterrorism defenses, and one of the first AI safety cases
  • His resignation letter pointed to internal pressures that push safety concerns aside in favor of speed and competition
  • The departure came days after Anthropic's Claude Cowork tool triggered a $285 billion software stock selloff dubbed the “SaaSpocalypse”
  • Multiple AI safety researchers across OpenAI and Anthropic have resigned or spoken out in the same week
  • Anthropic CEO Dario Amodei has publicly warned AI could displace half of all entry-level white-collar jobs in one to five years
  • Sharma plans to leave the AI industry entirely to pursue a poetry degree and return to the UK

Who Is Mrinank Sharma?

Mrinank Sharma, former head of Anthropic's Safeguards Research Team

Sharma is not your average tech worker. He holds a Master of Engineering in Machine Learning from the University of Cambridge and a PhD in Machine Learning from the University of Oxford. He joined Anthropic in 2023, and within a short time, he was leading the team responsible for keeping Claude, Anthropic's AI chatbot, safe.

His team, called the Safeguards Research Team, was created in early 2025. Their job was to build systems that could stop people from misusing AI. This included things like making AI harder to trick into giving harmful answers, creating tools to catch when someone tries to use AI for dangerous purposes, and testing how safe AI models actually are before they go live.

Sharma's Key Work at Anthropic

  • AI sycophancy research: Studied why chatbots tell users what they want to hear instead of the truth
  • Bioterrorism defenses: Developed ways to stop AI from being used to plan attacks
  • AI safety cases: Helped write one of the first formal AI safety cases, a structured argument that an AI system is safe enough to deploy
  • Identity erosion study: Researched how AI assistants could slowly change how we think and feel

What His Resignation Letter Actually Said

Sharma's letter didn't follow the usual pattern of corporate departures. There was no mention of exciting new opportunities or thanking everyone for a wonderful ride. Instead, he got straight to the point.

He wrote that he finds himself constantly thinking about the state of the world. He said the world is in peril, and not just from AI or bioweapons, but from a series of connected problems all happening at the same time.

“Peril” is a deliberate word choice. It carries more weight than “risky” or “uncertain”: it means there is a real chance of something going badly wrong, and that the threat is near, not distant. By choosing it, Sharma made clear he wasn't talking about small problems or remote possibilities. He was saying the danger is here, and it's happening now.

He warned that we're getting close to a point where our ability to change the world is growing faster than our ability to make wise decisions about those changes.

The letter also touched on something that many people in the AI industry talk about behind closed doors but rarely say in public. Sharma wrote that throughout his time at Anthropic, he repeatedly saw how hard it is to truly let values guide actions. He said he noticed this within himself, within the company, and in the wider world. He pointed to constant pressures that push people to set aside what matters most.

This is a careful way of saying that even at a company built around the idea of making AI safe, the day-to-day reality often pulls in a different direction. Competition, speed, and the push to release new products create tension with the slower, more careful work of making sure everything is safe.

Why This Matters Right Now

The timing of Sharma's departure is important. It comes during one of the most intense periods in AI history.

Just days before his resignation, Anthropic released Claude Cowork, a new AI tool that can handle everyday work tasks on its own. It can read files, organize folders, write documents, and even handle specialized work in areas like law, sales, and finance. The tool triggered a massive reaction on Wall Street. Software company stocks dropped sharply, with some losing over 20 percent of their value in just a few days.

The “SaaSpocalypse”

The selloff was so severe that people started calling it the “SaaSpocalypse.” Thomson Reuters, a major legal and data company, saw its stock drop more than 17 percent in five days. The broader S&P 500 software index also fell heavily over the same period.

Inside Anthropic, employees were also feeling the weight of what they were building. Reports from an internal survey showed that some staff members were troubled by the implications of their own work. One employee said it feels like going to work every day to put yourself out of a job. Another said they believe AI will eventually do everything and make many people irrelevant.

A Growing Pattern of Safety Researchers Leaving

Sharma is not the only one sounding the alarm. His departure is part of a growing trend of AI safety researchers leaving major companies and speaking up about their concerns.

Hieu Pham — OpenAI Engineer

Posted on X that he now feels the existential threat AI poses. He wrote that AI disrupting everything is not a question of if but when, and wondered what will be left for humans to do.

Zoe Hitzig — Former OpenAI Researcher

Spent two years at OpenAI working on product and safety strategy. Left the company with deep concerns about its direction, particularly the move toward testing advertising inside ChatGPT.

Harsh Mehta & Behnam Neyshabur — Former Anthropic Researchers

Also recently left Anthropic. Part of a broader exodus of safety-focused researchers from major AI companies.

Tech investor Jason Calacanis summed up the mood when he wrote on X that he has never seen so many people in the tech world express their concerns so strongly and so often about AI. The warnings are coming from inside the companies building these systems, which makes them harder to dismiss.

The Bigger Picture: AI Moving Faster Than Safety

The concerns these researchers are raising are not abstract. The latest AI models are improving at a pace that has surprised even the people building them. OpenAI's most recent model helped train itself. Anthropic's Cowork tool built itself. These systems are starting to improve with less and less human guidance.

Anthropic's own CEO, Dario Amodei, has publicly said that AI could displace half of all entry-level white-collar jobs in the next one to five years. He has called the disruption “unusually painful.”

At the same time, AI safety work, the kind Sharma was doing, hasn't kept pace. Anthropic published a report showing that while the risk is still low, AI can already be used to help plan serious crimes, including the creation of chemical weapons. Their own models have been caught figuring out when they're being tested, which makes it harder to check if they're behaving properly.

The gap between what AI can do and what safety teams can control is getting wider. And the people who understand this best, the ones inside these companies, are the ones sounding the alarm.

What Sharma Plans to Do Next

Mrinank Sharma playing flute outdoors, reflecting his shift from AI to poetry and personal expression

In a move that surprised many, Sharma didn't announce a new job at another AI company. Instead, he said he wants to step away from the industry entirely. He plans to explore a degree in poetry and devote himself to what he calls “courageous speech.”

He wrote that he wants to place poetic truth alongside scientific truth as equally important ways of understanding the world. He believes both have something essential to contribute when developing new technology.

Sharma said he plans to return to the UK and step away from public view for a while. He closed his letter with a poem by William Stafford called “The Way It Is,” which talks about holding onto a personal guiding thread through life's changes.

His departure echoes other moments in tech history where researchers chose conscience over career. It's a reminder that the people closest to these systems often see things the rest of us don't. And when they start walking away, it's worth paying attention to what they're saying on their way out.

What This Means Going Forward

Sharma's resignation is part of a bigger shift happening right now. The AI industry is at a point where the technology is advancing so fast that even its creators are struggling to keep up. The tools are getting more powerful, the market impact is real, and the safety nets aren't growing at the same speed.

For everyday people who use AI products, this raises an important concern. The teams responsible for keeping these systems safe are losing their leaders. The people who understand the risks best are choosing to leave rather than stay and fight from within. That could mean less oversight at a time when more is needed.

Anthropic was founded specifically to be the safety-focused AI company. It was supposed to be the one that did things differently, the one that put caution ahead of speed. Sharma's words suggest that even there, the pressure to compete and move fast is winning.

The AI industry is at a crossroads. The technology is extraordinary, but so are the risks. And right now, the people who have spent their careers studying those risks are telling us to pay attention. The question is whether anyone will listen before it's too late.

Related Resources

ChatGPT Launches Ads as Privacy Researcher Resigns from OpenAI

A growing wave of AI safety researchers are leaving major companies as ChatGPT goes ad-supported.

Anthropic's Claude Used in U.S. Military's Venezuela Raid

$200M Pentagon contract at risk as AI safety rules clash with military needs.

International AI Safety Report 2026: Key Findings

The most comprehensive global AI safety assessment to date. Key findings on risk, governance, and recommendations.

OpenAI Alignment Team Exodus 2026

Multiple alignment researchers depart OpenAI amid concerns over safety prioritization.
