The world of AI research just hit a major turning point. For the first time, a leading researcher at a top American AI company walked away from his job because of how the company talked about China. This wasn't about money or better opportunities elsewhere. It was about words and values.
This story goes much deeper than one person leaving one company. It opens up questions that affect the entire AI industry. How should tech companies handle global politics? What happens when business decisions clash with personal identity? And most importantly, are we heading toward a divided AI world where researchers have to pick sides?
We'll look at what actually happened, why it matters, how different AI companies are taking completely different paths, and what experts think about where this is all heading. The answers might surprise you, and the stakes are bigger than you think.
Let's get into it.
Executive Summary
A top AI researcher left Anthropic in September 2025 after the company publicly labeled China an "adversarial nation." Yao Shunyu, who worked on Claude development, cited this as 40% of his reason for departing to Google DeepMind.
Key developments:
- Anthropic banned all companies over 50% Chinese-owned worldwide, going beyond other AI companies
- CEO Dario Amodei actively lobbies for stricter export controls and calls to "defeat China in this technology"
- Other AI companies take varied approaches: OpenAI blocks by location, DeepMind advocates cooperation, Microsoft plays both sides, Meta releases open-source models
Broader implications:
- US government drives restrictions through export controls on AI chips and models
- 38% of top US AI researchers were born in China, creating talent concerns
- China responds by building cheaper, efficient models like DeepSeek using restricted hardware
- Experts debate whether restrictions help or accelerate China's independence
This incident marks the first documented case of a researcher leaving a major US AI company specifically over its anti-China policy stance.
A Top Researcher Walks Away

Image Credit: South China Morning Post
Yao Shunyu was a rising star in AI research who made a surprising move in September 2025. After working at Anthropic for less than a year, he decided to leave the company. His departure caught attention because he openly shared why he was walking away.
Yao had an impressive background. He studied physics at Tsinghua University, one of China's top schools, and later got his PhD from Stanford University. He joined Anthropic in October 2024 and worked on developing their Claude AI models.
Key details about his departure:
- Last day at Anthropic: September 19, 2025
- New position: Senior Staff Research Scientist at Google DeepMind
- Started new role: September 29, 2025
- Main reason for leaving: About 40% was his disagreement with how Anthropic talked about China
The breaking point came when Anthropic publicly called China an "adversarial nation" in their company policy. This label bothered Yao deeply. Even though he believed most people working at Anthropic disagreed with this language, he felt he couldn't stay anymore. He wrote on his website that while his time there was good, leaving was the better choice.
Yao put the anti-China stance at about 40% of his reason for leaving; the remaining 60% involves internal Anthropic matters he says he cannot discuss. From his blog post:
1. ~40% of the reason: I strongly disagree with the anti-china statements Anthropic has made. Especially from the recent public announcement, where China has been called “adversarial nation”. Although to be clear, I believe most of the people at anthropic will disagree with such a statement, yet, I don’t think there is a way for me to stay.
2. The remaining 60% is more complicated. Most of them contains internal anthropic informations thus I can’t tell.
What Made Him Leave

On September 5, 2025, Anthropic released a policy update that changed everything for Yao. The company announced new rules about who could use their AI services. But it wasn't just the rules that created problems. It was the words they used.
Anthropic directly called China an "adversarial nation" in their official policy document. They also described China as an "authoritarian region" whose companies could be forced to share data with intelligence services. The company warned that Chinese entities might use their AI to help military operations.
How the ban actually worked:
- Blocked any company more than 50% owned by Chinese entities
- Applied worldwide, not just in China
- Affected major companies like ByteDance, Alibaba, and Tencent
- Even their overseas operations lost access
This approach went much further than other AI companies. Most others simply blocked people based on where they lived. Anthropic blocked based on who owned the company, no matter where they operated.
For Yao, the "adversarial nation" label felt personal. He mentioned in his blog post that most Anthropic employees probably disagreed with this language. But the company had officially taken this stance, and he couldn't separate himself from it while staying there.
How Far Does Anthropic Go With Its China Position?

Anthropic's CEO, Dario Amodei, has been very vocal about viewing China as a serious threat. He has given multiple interviews and written articles explaining why he thinks keeping advanced AI away from China matters for global security. In one interview, he said it was important to "defeat China in this technology" because it could control the future of freedom and democracy.
Amodei believes that if China gets access to the same level of AI technology, they would focus more resources on military uses. He worries this could give China a commanding lead not just in AI, but in global power overall. He has also warned that Chinese spies are likely trying to steal AI secrets from American companies.
In his January 2025 blog post "On DeepSeek and Export Controls," he wrote: "Export controls serve a vital purpose: keeping democratic nations at the forefront of AI development." He argued that "well-enforced export controls are the only thing that can prevent China from getting millions of chips, and are therefore the most important determinant of whether we end up in a unipolar or bipolar world."
In a February 5, 2025 interview with ChinaTalk, Amodei stated: "Whatever the dangers of the technology, whatever guardrails are needed, I think it's also very important that we defeat China in this technology. This could control the fate of nations. This could control the future of freedom and democracy." He later told an Axios AI+ DC Summit in September 2025: "It is mortgaging our future as a country to sell these chips to China."
What makes Anthropic different:
- Most vocal among all AI company leaders about China
- Actively lobbied the government for tougher export rules
- Published detailed policy papers recommending stricter chip controls
- Willing to sacrifice major revenue to maintain their stance
On DeepSeek specifically, Amodei said Anthropic evaluated the R1 model and found it "did the worst of basically any model we'd ever tested" on national security evaluations, noting it had "absolutely no blocks whatsoever" against generating bioweapon information.
This position came with a real cost. Anthropic admitted their China ban would lose them hundreds of millions of dollars in revenue. But they proceeded anyway, saying it was necessary to protect democratic values and prevent AI from serving authoritarian goals.
What Other AI Companies Are Doing
Different AI companies have taken very different approaches to dealing with China. Understanding these differences helps explain why Yao chose to move to Google DeepMind instead of staying at Anthropic or going elsewhere.
OpenAI: Complete Geographic Block

OpenAI started blocking China in June 2024. They stopped all API access for developers in mainland China and Hong Kong by July 9, 2024. Before this, Chinese users could access ChatGPT through VPNs and workarounds, but OpenAI moved to actively block these methods too.
OpenAI's blocking method:
- Based purely on location, not company ownership
- Covered every country outside OpenAI's list of supported countries, including China, Hong Kong, and Russia
- Cited national security concerns
- Used less inflammatory language than Anthropic
The key difference is that OpenAI never used terms like "adversarial nation" in their announcements. They restricted access but kept their public statements more neutral about why they were doing it.
Google DeepMind: The Cooperative Approach

Google DeepMind takes a notably softer stance. Their CEO, Demis Hassabis, has publicly said the US and China should work together on AI safety issues. This creates a sharp contrast with Anthropic's uncompromising position.
Google itself left China back in 2010 after disputes over censorship and hacking. DeepMind has tightened how it publishes research because of China concerns, but their overall message focuses more on cooperation than confrontation.
DeepMind's position:
- Advocates for US-China cooperation on AI safety
- More moderate public rhetoric
- Hassabis believes in "talking to everyone including China"
- Still concerned about security but less aggressive
When lawmakers asked DeepMind what stops their researchers from leaving for China, they answered honestly: "Candidly, nothing." This admission shows they understand that hostile rhetoric could push talent away.
Microsoft: Playing Both Sides

Microsoft has the most contradictory position among all major AI companies. In 2024, they offered 700-800 employees working in China the option to relocate out of the country. If these employees wanted to keep their jobs, they had to move.
But here's where it gets confusing. Even though OpenAI blocked China completely, OpenAI's models are still available through Microsoft Azure China. Azure operates in China through a partnership with a local company called 21Vianet.
The Microsoft contradiction:
- Pressured China-based employees to leave
- But still offers OpenAI models through Azure China
- Chinese customers can access ChatGPT by using Azure
- Creates a major loophole in the supposed ban
This setup shows Microsoft trying to balance two competing interests. They respond to US government pressure by reducing their China presence, but they also protect their large research center in China and keep earning Chinese revenue.
Meta: The Open Door

Meta takes the most permissive approach through its open-source Llama models. Anyone can download and use these models, including people in China. While Meta's terms of service prohibit military use, there's no way to enforce this once a model is publicly released.
This approach has caused major controversy. In June 2024, Chinese military researchers used Meta's Llama model to build an AI tool for the People's Liberation Army. Meta said this violated their rules, but they admitted they can't actually stop it.
Why Meta's approach is controversial:
- Releases powerful AI models for anyone to download
- Can't enforce restrictions once models are public
- CEO Mark Zuckerberg argues open-source helps America stay ahead
- Whistleblower accused Meta of working too closely with Chinese officials
A former Meta employee testified to Congress in 2025, claiming the company worked closely with Chinese officials, created censorship tools for China, and even considered building data centers there. These allegations put Meta under intense scrutiny.
Zuckerberg's argument is that closed models are easier to steal, so making them open-source actually helps the US maintain its lead. This philosophy directly opposes Anthropic's view that tight control is necessary.
Why These Differences Matter
These varied approaches create very different environments for researchers. Yao left Anthropic for DeepMind specifically because DeepMind's leadership talks about cooperation rather than confrontation. A researcher's choice of where to work now depends partly on how comfortable they feel with their employer's political stance.
The US Government's Heavy Hand

While AI companies get attention for their China policies, the US government is actually the main force driving these restrictions. Most of what companies do comes from following laws, not just their own choices.
In October 2022, the government introduced powerful export controls on AI chips. These rules don't target specific bad companies. Instead, they block entire categories of advanced chips from going to China based on how powerful they are. Commerce Secretary Gina Raimondo made the goal clear: stop China from catching up to America in AI.
How the rules work:
- Block advanced AI chips from being sold to China
- Require special licenses that usually get denied
- Apply to any company trying to send chips there
- Create legal penalties for violations
The restrictions keep getting tougher. In October 2023, the government closed loopholes when Nvidia tried making weaker chips for China. By January 2025, they went even further by controlling AI models themselves, not just the chips. This was the first time the government treated trained AI models as controlled items.
Both Democrats and Republicans support these policies. The Biden administration created most of the rules, and the Trump administration continued adding more. No major political group wants to relax these restrictions. If anything, lawmakers debate whether to make them even stricter.
But there's a big problem. The Bureau of Industry and Security has fewer than 600 employees to oversee trillions of dollars in global trade. Chinese companies set up shell companies faster than the government can track them.
How China is Responding and Moving Forward

China isn't sitting quietly in response to these restrictions. Chinese AI companies are actively working to overcome limitations and prove they can compete without American technology.
When Anthropic banned Chinese users, local companies immediately jumped in to fill the gap. Z.ai launched a "house moving" program to help Claude users switch to their service, offering free tokens and technical support. Moonshot AI released Kimi K2, claiming it matched Claude Sonnet 4's coding abilities.
China's strategy to lead AI research:
- Focusing on making AI models cheaper and more efficient instead of just using more computing power
- Releasing open-source models that anyone can use and improve
- Spending over $150 billion since 2014 to build their own chip industry
- Finding creative ways to work around hardware limitations
DeepSeek became the perfect example of this approach. In January 2025, they shocked everyone by building powerful AI models using restricted, less advanced chips. They proved that smart engineering and better algorithms could replace raw computing power. This was exactly what American companies didn't expect.
China has also started talking about bigger goals. Alibaba's CEO recently announced plans for artificial superintelligence, showing Chinese companies aren't just trying to catch up anymore. They want to lead.
The restrictions might have actually pushed China to innovate faster. Instead of depending on American chips and technology, they're building their own path forward.
Why Some Experts Think This Strategy Has Problems
Not everyone agrees that blocking China from AI technology is a smart move. Several experts worry these policies might backfire and actually hurt America more than help.
The biggest concern is losing talent. A 2022 study found that 38% of top AI researchers working in America were born in China, a larger share than those born in the US itself. When companies use hostile language about China, these researchers might leave for other countries or return home.
Key concerns experts raise:
- Restrictions might push China to build its own technology faster
- American chip companies earn about 30% of revenue from China
- Over 90% of Chinese PhD students stay in America after graduation
- Cooperation on AI safety might be more effective than confrontation
Some economists warn that cutting China off could speed up their independence. China has already spent over $150 billion building its own chip industry. DeepSeek's success in January 2025 shocked people because they built strong AI with restricted hardware, showing China was adapting faster than expected.
There's also debate about whether working together on AI safety issues would serve everyone better than treating China as an enemy.
How Reddit Users Reacted

The news sparked strong reactions on Reddit's AI community. Users had mixed feelings, with many criticizing Anthropic's approach.
Several users pointed out the irony in Amodei's stance. User foldl-li asked what happened to Amodei after he worked at Baidu, a Chinese company. MindlessScrambler responded by explaining Baidu's poor reputation and questioned what Amodei saw there that changed his views.

Common criticisms included:
- Weary-Willow5126 called Amodei's blog post about DeepSeek "the most ridiculous thing" they ever read, saying only people outside the US could understand how insane his narrative was
- spritehead said Anthropic trying to be a "moral arbiter while shilling to the US surveillance state is so heinous and laughable"
- Recoil42 reminded people that Anthropic is a CIA and NSA contractor partnered with Palantir
Some users like nderstand2grow noted that Anthropic has zero open weight models while claiming to be about AI safety. Zone_Purifier sarcastically added that trusting people with open models is apparently "immoral and dangerous."

User Iory1998 pointed out the contradiction that most Chinese AI is open-source while Anthropic's models are completely closed with no way to verify their claims.

However, not everyone agreed. HolidayPsycho defended the stance, saying China is factually an adversarial nation, though this doesn't mean all Chinese people are enemies.
