OpenAI is fighting legal battles on two different fronts right now. The company behind ChatGPT faces serious challenges that could change how AI companies operate in the future.

In the United States, OpenAI is locked in a privacy dispute with The New York Times. The newspaper wants access to millions of private ChatGPT conversations, claiming these chats might show people getting around its paywall. OpenAI refuses to hand over this data, saying it would violate user privacy.

Across the Atlantic in Germany, OpenAI lost a copyright case. A Munich court ruled that ChatGPT broke the law by training on protected song lyrics without permission. The company must now pay damages to music creators.

Let's break down what happened in each case and see where AI, privacy, and copyright might be heading.

Executive Summary

OpenAI faces two major legal battles that highlight critical challenges for the AI industry.

The New York Times Privacy Case

The newspaper demands access to 20 million private ChatGPT conversations, claiming they might show users bypassing its paywall. NYT originally requested 1.4 billion conversations before scaling down to this smaller sample covering December 2022 to November 2024. OpenAI calls this a privacy invasion affecting millions of innocent users who have no connection to the paywall issue.

The company offered alternative solutions like targeted searches that would protect user privacy while giving NYT relevant data. However, the newspaper rejected all these options and insisted on full access to complete conversations. An earlier court order in June 2025 forced OpenAI to keep all user data indefinitely, breaking their 30-day deletion promise. This order ended in September after successful appeals.

The German Copyright Ruling

A Munich court ruled that ChatGPT violated copyright law by training on protected German song lyrics without permission. GEMA, Germany's music rights society representing 100,000 members, brought the case involving nine popular songs, including major hits from Herbert Grönemeyer and Helene Fischer. OpenAI argued their AI learns from entire datasets without copying specific songs and that users should be responsible for outputs. The court rejected both defenses and ordered OpenAI to pay undisclosed damages.

The Privacy Fight with The New York Times


The New York Times filed a lawsuit against OpenAI that goes beyond typical copyright disputes. This case centers on user privacy and data access. The newspaper wants OpenAI to hand over millions of private ChatGPT conversations from regular users. Their reasoning is straightforward but controversial. They believe some of these conversations might contain evidence of people using ChatGPT to get around the NYT paywall.

The paywall issue matters to the newspaper because it protects their subscription revenue. When readers hit the paywall, they need to pay to read articles. The NYT suspects that some users might be copying their article text and asking ChatGPT to summarize or rewrite it. This would let them access the content without paying for a subscription.

OpenAI calls this lawsuit baseless. They argue that the newspaper is using the legal system to invade user privacy without solid proof of wrongdoing. The company points out that most ChatGPT users have no connection to the NYT paywall issue. Yet their private conversations would be exposed if the court grants this request.

The Staggering Amount of Data Requested


The scope of data requests in this case grew to massive proportions. The New York Times did not start with a small, targeted request. Instead, they cast an extremely wide net that would capture millions of innocent users.

Timeline of data requests:

  • Initial demand: 1.4 billion private ChatGPT conversations
  • OpenAI's response: Pushed back hard against this sweeping request
  • Revised demand: 20 million conversations randomly selected
  • Time period covered: December 2022 to November 2024

These 20 million chats were not chosen because they showed suspicious activity. OpenAI was ordered to provide a random sample from nearly two years of user conversations. This means the vast majority of conversations in this sample have nothing to do with the NYT paywall. They contain personal discussions, work projects, creative writing, homework help, and countless other private uses.

The random sampling approach raises serious concerns. It treats user privacy as something that can be violated in bulk, without specific evidence linking those users to any wrongdoing. OpenAI argues this goes against basic privacy principles that have protected people for decades.

OpenAI also implemented technical safeguards. The 20 million conversations are stored separately from normal systems. They sit in a secure environment under legal hold. This means the data cannot be accessed or used for any purpose except meeting legal obligations. Only a small team of audited legal and security staff can view this information when absolutely necessary.

The company emphasizes that this data remains protected even under court order. It has not been shared with the New York Times, the court, or any third party.

How OpenAI Is Fighting to Protect User Privacy


OpenAI took multiple steps to defend user privacy against these demands. The company treats this as a fundamental responsibility, not just a legal obligation. With 800 million people using ChatGPT every week, the stakes are high.

Privacy protection measures OpenAI implemented:

  • Filing legal motions to challenge the data requests in court
  • Running de-identification procedures on affected conversations
  • Removing personal information like names, addresses, and passwords (see the sketch after this list)
  • Proposing targeted search alternatives to the NYT
  • Keeping the data in a separate, secure system under legal hold
  • Restricting access to a small, audited security team
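
OpenAI has not published how its de-identification step actually works, so the following is only a minimal sketch of the general idea. The patterns, labels, and example data are assumptions for illustration; a real pipeline would rely on trained entity-recognition models and far broader coverage than a few regular expressions.

```python
import re

# Illustrative patterns only -- assumptions for this sketch, not
# OpenAI's actual rules.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace matched personal identifiers with typed placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(deidentify("Reach me at jane.doe@example.com or +1 (555) 123-4567."))
# -> Reach me at [EMAIL] or [PHONE].
```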

The company offered several privacy-preserving options to the newspaper. One proposal involved running targeted searches on the conversation sample. This would let the NYT find chats that might include text from their articles without seeing unrelated conversations. Another option provided high-level data about how ChatGPT was used, without exposing actual conversation content.
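
The filings do not spell out how such a targeted search would work. Conceptually, though, it is a containment test: scan each conversation for a long verbatim run of article text and surface only the matches, leaving everything else unseen. A hypothetical sketch, with invented names and data:

```python
def find_matching_chats(conversations, article_snippets, min_len=50):
    """Return IDs of conversations containing a long verbatim run of
    article text; every other conversation stays unseen."""
    hits = []
    for conv_id, text in conversations.items():
        if any(len(s) >= min_len and s in text for s in article_snippets):
            hits.append(conv_id)
    return hits

# Invented example data for the sketch.
chats = {
    "chat-001": "Summarize this for me: Lorem ipsum dolor sit amet, consectetur adipiscing elit.",
    "chat-002": "Help me plan a birthday party for my daughter.",
}
snippets = ["Lorem ipsum dolor sit amet, consectetur adipiscing elit."]
print(find_matching_chats(chats, snippets))  # ['chat-001']
```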

The New York Times rejected all these alternatives. They insisted on full access to the complete conversations. This decision reinforced OpenAI's view that the newspaper prioritizes its lawsuit strategy over user privacy.

The Earlier Court Order That Threatened Data Deletion


Before the demand for 20 million conversations, OpenAI faced another privacy challenge in June 2025. The New York Times asked the court to force OpenAI to keep all user data forever. This meant ChatGPT conversations and API data would never be deleted, no matter what users wanted. The court initially granted this preservation order.

This decision directly conflicted with OpenAI's core privacy promise. The company tells users that deleted chats get removed immediately and are permanently deleted from their systems within 30 days.

Impact of the preservation order:

  • Required indefinite retention of all new conversations
  • Applied to ChatGPT Free, Plus, Pro, and Team users
  • Prevented automatic deletion after 30 days
  • Excluded ChatGPT Enterprise and Edu customers

Users lost their right to permanently remove conversations. OpenAI filed motions and appeals, arguing that indefinite data retention violates industry standards and their own policies.

By September 26, 2025, the preservation order ended. OpenAI successfully restored normal data deletion practices. Users could again delete their chats with confidence they would be permanently removed after 30 days.

The NYT still demands that OpenAI keep specific user data from April to September 2025 in secure storage while the legal battle continues, but OpenAI no longer has to keep all new conversations forever.

Future Privacy Features OpenAI Plans to Build

OpenAI learned important lessons from these legal battles. The company realized that privacy protections need to be stronger as AI becomes more important in people's lives. They announced plans to accelerate their security roadmap with features that put users in complete control.

The centerpiece of this plan is client-side encryption for ChatGPT messages. With this technology, messages get scrambled on your device before reaching OpenAI servers. Only you have the key to unscramble them.

What this means for users:

  • Your messages get encrypted on your phone or computer
  • OpenAI stores encrypted data they cannot read
  • Even court orders would only get encrypted files
  • Only you can unlock and read your messages

This would make the NYT situation impossible. Even if courts demand conversations, OpenAI would only have unreadable encrypted data.
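
OpenAI has not said which encryption scheme it plans to use, so the following is only a rough illustration of the client-side idea, using the Fernet recipe from Python's cryptography package. The point is that the key stays on the user's device, so anyone holding only the stored ciphertext can read nothing:

```python
from cryptography.fernet import Fernet

# The key is generated and kept on the user's device; it is never
# sent to the server.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt before the message leaves the device.
ciphertext = cipher.encrypt(b"my private chat message")

# A server holding only ciphertext (or a court order served on that
# server) gets opaque bytes it cannot read.
print(ciphertext)

# Only the key holder can recover the plaintext.
print(cipher.decrypt(ciphertext))  # b'my private chat message'
```

One open design question: an AI service still needs plaintext to generate a reply, so the encryption would most plausibly cover stored chat history rather than live processing. OpenAI has not published those details.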

OpenAI also plans fully automated safety systems to detect serious threats without human involvement. Only critical issues like life threats would reach a small, vetted human review team. Regular conversations would never be seen by anyone.
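
OpenAI has not described how this triage would be built. Purely as a hypothetical sketch, the routing decision might reduce to a threshold on an automated risk score, with everything below it never reaching human eyes:

```python
def route_conversation(risk_score: float, threshold: float = 0.98) -> str:
    """Hypothetical triage: automated systems handle nearly everything;
    only the highest-risk cases reach a small, vetted human team."""
    if risk_score >= threshold:
        return "escalate to vetted human review"
    return "handled automatically, seen by no one"

print(route_conversation(0.12))  # handled automatically, seen by no one
print(route_conversation(0.99))  # escalate to vetted human review
```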

The Copyright Battle in Germany

While OpenAI battles privacy issues in the United States, they face an entirely different legal problem in Europe. This case focuses on copyright law and how AI companies use creative work to train their models. The German ruling came just before the latest privacy fight with the New York Times, adding more legal pressure on OpenAI.

What Happened in the Munich Court


In November 2025, a Munich regional court delivered a blow to OpenAI. The court ruled that ChatGPT violated German copyright law. The violation was clear and specific: ChatGPT used protected song lyrics from popular German artists to train its language models without getting permission or paying for the rights.

This ruling marks a turning point for AI companies operating in Europe. German courts took a firm stance that copyright laws apply to AI training data. The decision sent a message that AI companies cannot simply harvest creative content from the internet without respecting the rights of creators.

The court ordered OpenAI to pay damages to the affected rights holders. The exact amount remains undisclosed due to legal agreements. However, the financial penalty is less important than the legal precedent this case establishes across Europe.

Who Filed the Lawsuit Against OpenAI


GEMA, Germany's music rights society, brought this case to court. The organization manages copyrights for composers, lyricists, and music publishers, representing approximately 100,000 members across the German music industry.

GEMA filed the lawsuit in November 2024 after discovering that ChatGPT had harvested protected lyrics. They argued that ChatGPT used these lyrics to learn language patterns and improve its responses. This training happened without permission from the rights holders and without any compensation.

Key points about GEMA's role:

  • Protects the financial interests of music creators
  • Collects royalties when copyrighted music gets used
  • Enforces copyright law on behalf of its members
  • Ensures creators get paid for their work

The organization saw this case as a test for how European law would treat AI training practices. GEMA wanted to establish that the internet is not a free resource for AI companies. Creative work has value and creators deserve compensation when their work gets used.

The Nine Songs at the Center of the Case

The lawsuit focused on nine of the most recognizable German hits from recent decades. These were not obscure songs that few people knew. They were massive hits that defined German pop culture for years.

Two songs stood out in the court documents. Herbert Grönemeyer's 1984 synth-pop song "Männer" (Men) became one of the most iconic German songs of the 1980s. The other was Helene Fischer's "Atemlos durch die Nacht" (Breathless Through the Night), a song so popular that it served as an unofficial anthem for the German national team during the 2014 football World Cup.

ChatGPT used these songs and seven others to improve its language capabilities. The AI learned from the lyrics, studying how words connect and how language creates meaning and emotion. However, this learning process happened without any legal agreement with the song creators or rights holders.

Why these songs mattered to the case:

  • They represent significant creative achievements
  • The creators invested time and talent to write them
  • They generate ongoing income for rights holders
  • Their popularity made them valuable training data
  • Using them without permission deprived creators of compensation

How OpenAI Defended Itself in Court

OpenAI mounted a two-part defense in the German court. Both arguments failed to convince the judges.

First, OpenAI argued that their language learning models absorb entire training sets of data. They claimed they do not store or copy specific songs. Instead, the AI learns patterns from massive amounts of text. Individual songs become part of a larger dataset that teaches the AI how language works.
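
To see what "learning patterns rather than storing copies" means, consider a toy example: a model that only counts which word follows which. This is a deliberate oversimplification of how large language models actually train, but it shows how statistics derived from a text can persist after the text itself is discarded:

```python
from collections import Counter, defaultdict

# Stand-ins for training documents; no real lyrics are used here.
corpus = [
    "the melody carries the night",
    "the melody carries the song",
]

# Count which word tends to follow which. The documents themselves
# are discarded; only aggregate transition statistics remain.
transitions = defaultdict(Counter)
for doc in corpus:
    words = doc.split()
    for a, b in zip(words, words[1:]):
        transitions[a][b] += 1

print(transitions["carries"].most_common())  # [('the', 2)]
```

Whether this distinction shields the developer legally was exactly the question before the court.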

The court rejected this argument. The judges determined that using copyrighted material for training still counts as use. It does not matter whether the AI stores complete copies of the songs. The fact that ChatGPT learned from protected lyrics without permission violated copyright law.

Second, OpenAI tried to shift legal responsibility to users. They argued that ChatGPT output gets generated by users through their prompts. Since users create the prompts that lead to responses, OpenAI claimed users should be held legally liable for any copyright issues.

This argument also failed. The court found that OpenAI trained their model using copyrighted material. This training happened before any user typed a prompt. The company made the choice to use protected lyrics in their training data. Users had no control over or knowledge of what data trained the AI. Therefore, OpenAI bears the legal responsibility.

The Court's Ruling and Its Broader Impact


The presiding judge ordered OpenAI to pay undisclosed damages for using copyrighted material without permission. This financial penalty punishes past violations and serves as a warning about future conduct.

GEMA celebrated the decision as "the first landmark AI ruling in Europe." The organization's chief executive, Tobias Holzmüller, stated that the ruling proved "the internet is not a self-service store and human creative achievements are not free templates."

Significance of this ruling:

  • Establishes legal precedent across Europe
  • Protects creators in music and potentially other fields
  • Requires AI companies to respect copyright law
  • Creates legal certainty for publishers and platforms
  • Sends a clear message to the global tech industry

The German Journalists' Association called it "a milestone victory for copyright law." Legal experts predict this decision will influence courts in other European countries. The ruling creates a framework that other creative industries can use to protect their work from unauthorized AI training.

GEMA's legal adviser, Kai Welp, said the organization now hopes to negotiate with OpenAI. They want to establish how rights holders can be compensated when AI companies use their work. This could lead to licensing agreements that let AI companies legally train on copyrighted material while paying creators fairly.

OpenAI's Response to Both Cases

OpenAI issued separate statements addressing each legal battle. Their responses show different strategies for handling privacy concerns versus copyright disputes.

How OpenAI Responded to the Privacy Battle


OpenAI made strong commitments about protecting user data throughout the New York Times lawsuit. The company stated that trust, security, and privacy guide every product decision they make. They emphasized that 800 million people use ChatGPT weekly and trust them with sensitive information.

The company is considering all available legal options to protect user privacy. They continue to appeal court orders and challenge data requests at every stage. Beyond legal defense, OpenAI accelerated their security roadmap. They announced plans to build client-side encryption and stronger automated safety systems. These features aim to keep conversations private even from OpenAI itself.

How OpenAI Responded to the German Copyright Ruling

OpenAI took a different tone with the German case. They disagreed with the Munich court's ruling and stated they are considering an appeal. The company emphasized that this decision only affects a limited set of lyrics. They pointed out that millions of people, businesses, and developers in Germany continue using ChatGPT every day without disruption.

Key points from OpenAI's statement:

  • They respect the rights of creators and content owners
  • They are having productive conversations with organizations worldwide
  • They want creators to benefit from AI opportunities
  • The ruling does not impact daily ChatGPT operations in Germany

OpenAI maintains they are working to find solutions that benefit both creators and AI development.

What These Cases Mean for the AI Industry

These two cases represent critical challenges facing the AI industry. They are not just about OpenAI. They show the growing pains that come when powerful new technology meets existing laws and social expectations.

Privacy and copyright have become central issues in AI development. Companies need massive amounts of data to train their models. This data often includes private conversations, creative works, and copyrighted material. The question is how AI companies can access this data legally and ethically.

AI companies must now balance innovation with legal responsibility. They cannot simply move fast and ignore rules. Different countries are taking different approaches to regulating AI. Germany focuses heavily on copyright protection. The United States grapples with privacy standards. This creates a complex legal landscape for companies operating globally.

These cases set important precedents:

  • How much privacy protection AI users deserve
  • Whether AI training counts as copyright infringement
  • What data companies can collect and retain
  • How courts will handle AI-related disputes

OpenAI's Partnerships with Other News Organizations

While fighting the New York Times in court, OpenAI maintains partnerships with many other news organizations. This creates an interesting contrast. They work with Axios, The Atlantic, Associated Press, Financial Times, Reuters, and nearly 20 media organizations total.

These partnerships bring OpenAI's tech to more than 160 news outlets in over 20 languages. Publishers use AI tools in their newsrooms to streamline work and reach audiences. ChatGPT displays their content with clear citations and direct links to original sources. OpenAI also provides grants and API credits to help smaller newsrooms adopt AI technology.