
Is ChatGPT Attorney-Client Privilege Protected? The Heppner Ruling and Generative AI in Litigation

A federal court just issued the first ruling of its kind: chats with ChatGPT and Claude are not protected by the attorney-client privilege. Here is what Heppner decided, why generative AI breaks traditional privilege, and how legal teams can keep using AI without waiving privilege.

[Diagram: Public AI — your prompt goes to the cloud, a third-party disclosure; logs are discoverable and privilege is waived. Private AI — runs on your Mac; nothing leaves your device and confidentiality is preserved.]

The short answer

No. Using ChatGPT, Claude, or any other public AI tool for legal advice, legal strategies, or any privileged legal work is not protected by attorney-client privilege. In United States v. Heppner (February 2026), the Southern District of New York held that a defendant's Claude chats were outside attorney-client privilege and outside work product protection.

Why generative AI breaks traditional privilege

[Diagram: the three privilege elements against public AI — attorney (AI is not a lawyer), confidentiality (shared with a third party), legal advice (not for that purpose).]

Attorney-client privilege only protects confidential communications made for the purpose of obtaining legal advice from a licensed attorney. Three elements have to line up, and public AI chatbots miss all three.

1. Lawyer's involvement

The communication must be with an attorney. A generative AI system is not a licensed attorney, so a chat with it is not a communication with counsel.

2. Confidentiality

The communication must be kept confidential. Public AI platforms log user inputs, retain outputs, and reserve the right to disclose user data to third parties.

3. Legal-advice purpose

The communication must be made for the purpose of obtaining legal advice from counsel. A chatbot cannot provide legal advice, so most AI use fails this test.

Traditional privilege principles assume a one-to-one attorney-client relationship. The moment you add a third-party AI platform to that picture, you introduce disclosure, data retention, and an expectation-of-privacy problem that can defeat a privilege claim before a court even evaluates the content.

United States v. Heppner: the first court to decide whether Claude chats are privileged

[Graphic: 31 Claude chat files held not protected by attorney-client privilege or the work-product doctrine.]

On February 17, 2026, Judge Rakoff of the Southern District of New York issued the first nationwide ruling on whether generative AI chat logs qualify for attorney-client privilege or work-product protection. Bradley Heppner had been charged with fraud after a grand jury subpoena drew him into the investigation. After his arrest, FBI agents seized thirty-one documents containing chat exchanges between Heppner and Claude.

Heppner used Claude in anticipation of litigation, without direction from counsel, to work through legal strategies and rehearse arguments before meeting his lawyer. He later shared some outputs with counsel, and those outputs influenced the defense. He argued the files were protected. The court disagreed on all three elements of privilege.

“Because Claude is not an attorney, that alone disposes of Heppner's claim of privilege.” Judge Rakoff then found that the chats were not confidential and not made for the purpose of obtaining legal advice from counsel.

On the work product doctrine, the result was the same. The doctrine protects materials prepared in anticipation of litigation, but only when the work is prepared by counsel or at the direction of counsel. Since Heppner self-directed the use of Claude, the court held that the AI chats fell outside attorney-client privilege and outside work product protection.

The ruling is narrow on its facts but sweeping in its signal. For legal practice across the country, Heppner is now the anchor case on generative AI and legal privilege. For related reading on this tension, see our explainer on how AI tools can waive attorney-client privilege.

Source: Elizabeth X. Guo, “United States v. Heppner,” Harvard Law Review Blog (March 2026).

How using ChatGPT can waive attorney-client privilege

[Diagram: you → prompt → ChatGPT / Claude → logged and retained → discovery.]

Using ChatGPT for sensitive legal work creates several disclosure risks at once. Each one can contribute to a waiver of privilege under the reasoning Judge Rakoff applied to Claude chats.

Every prompt, including prompts containing client data, is transmitted to a third-party AI platform and stored under the provider's terms of service.

Consumer AI products may use your inputs and outputs to train AI models unless you explicitly opt out.

Anthropic's privacy policy and comparable policies from OpenAI reserve the right to disclose user data in connection with litigation, regulatory inquiries, or safety reviews.

Logs are discoverable. Courts can and do order AI providers to produce them, which puts privileged communications on the record.

The Heppner court leaned heavily on the fact that using a public AI platform strips away any reasonable expectation of confidentiality. Client intent does not rescue the claim. The terms of service for public AI tools are explicit. The provider can read, retain, and share the data. That reality is fatal to a privilege claim.

The practical consequence is that self-directed AI use on a consumer AI platform is the fastest way to waive attorney-client privilege without realizing it. Legal departments should treat every chat with a public AI tool as if it could later appear as an exhibit.

What the Heppner ruling means for legal teams and legal practice


Generative AI is already woven into legal workflow. Lawyers draft with it, paralegals summarize with it, and clients brainstorm with it before meetings. The Heppner ruling does not ban that behavior. It just warns every legal team that the current way most people use AI has no privilege shield.

There are three immediate implications for legal practice:

Client coaching is now part of engagement

Clients must be told not to paste facts into public AI chatbots. If they already have, counsel needs to know before arguing any privilege issue.

Firm AI policies need a privilege section

Every policy should separate approved private AI tools from consumer AI tools, and require that sensitive legal work stay on approved platforms.

Discovery strategy now includes AI logs

Opposing parties can seek AI chat logs. Preservation notices should cover them, and your own workflow should minimize what ends up there.

Some bar associations are also beginning to treat reckless AI use as a competence issue and, in egregious cases, as a potential unauthorized practice of law concern when clients rely on AI for legal information that only a licensed attorney should give.

Using AI without waiving attorney-client privilege

[Diagram: self-directed use of public AI — privilege waived, the Heppner outcome. Attorney-directed or local use — confidentiality preserved, data stays on device.]

It is possible, but the path is narrower than most people assume. Two conditions matter most when evaluating privilege for generative AI.

Attorney-directed AI use

Kovel doctrine analog

When counsel directs AI use the same way counsel directs an accountant, an interpreter, or a forensic expert, the AI tool can operate inside the attorney-client relationship. Courts have left this door open for future cases.

Architectural confidentiality

Private AI on your device

If your AI processes client data locally and never sends it to a third-party AI platform, the confidentiality prong of privilege analysis gets much stronger. No transmission means no disclosure.

Heppner lost most clearly on the confidentiality prong. A private AI that runs on your device directly solves that weakness. It is the single change that shifts AI use from a waiver risk to a defensible workflow.

The safer way: private AI for legal work


The Heppner ruling leaves legal teams with a practical problem. They need the speed of generative AI without giving up the reasonable expectation of confidentiality that privilege requires. Private AI that runs locally and redacts sensitive content before any cloud call is the way through.

What Elephas solves, and what it doesn't

Elephas is not a lawyer, and no AI can give you legal advice. What Elephas solves is the one prong that destroyed Heppner's privilege claim: confidentiality. The other two prongs, attorney involvement and the purpose of obtaining legal advice from counsel, still require a licensed attorney directing the work. Used alongside your lawyer's workflow, Elephas removes the third-party disclosure that is currently the single biggest barrier to AI use in privileged work.

Elephas: a privacy-friendly AI knowledge assistant for legal teams

Built-in local LLM models

Elephas provides built-in local LLM models so you can run AI entirely on your Mac. Your prompt, your client data, and the AI output never leave the device, which removes the third-party AI platform problem at the architecture level.

Smart Redaction for cloud calls

When you need the power of a frontier cloud model, Smart Redaction automatically detects and redacts sensitive information from your prompts before they are sent to the cloud, then reinserts the original data into the response locally so you never have to worry about leaking confidential information.
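Conceptually, a redact-before-send flow works like the sketch below. This is an illustrative approximation, not Elephas's actual implementation: the regex patterns, placeholder format, and function names are all assumptions made for the example.

```python
import re

# Illustrative redact-before-send sketch. Only the redacted prompt would
# ever be sent to a cloud model; the mapping stays on the local machine.
PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def redact(prompt: str):
    """Replace sensitive spans with numbered placeholders before any cloud call."""
    mapping = {}
    counter = 0
    for label, pattern in PATTERNS.items():
        for match in re.findall(pattern, prompt):
            counter += 1
            placeholder = f"[{label}_{counter}]"
            mapping[placeholder] = match
            prompt = prompt.replace(match, placeholder, 1)
    return prompt, mapping

def restore(response: str, mapping: dict) -> str:
    """Reinsert the original values locally after the model responds."""
    for placeholder, original in mapping.items():
        response = response.replace(placeholder, original)
    return response

redacted, mapping = redact("Email john.doe@client.com about SSN 123-45-6789.")
# The cloud model only ever sees: "Email [EMAIL_1] about SSN [SSN_2]."
```

Real redaction systems detect far more categories (names, account numbers, addresses) and often use NER models rather than regexes, but the round trip — substitute locally, send sanitized text, restore locally — is the core idea.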

Private knowledge bases per matter

You can build a Super Brain per client or per case, keep privileged legal files inside it, and search that knowledge with AI without ever syncing it anywhere else.

For a broader roundup, see our guide to the best private AI tools for lawyers. Elephas is the tool designed specifically around the concerns Heppner surfaced: no forced third-party disclosure, no data retention you do not control, no dependence on a public AI tool's privacy policy.

Public AI vs private AI: a privilege-safety comparison

When legal teams evaluate AI tools, this is the comparison that actually matters for privilege analysis.

| Criteria | Public AI (ChatGPT, Claude) | Private AI (Elephas) |
| --- | --- | --- |
| Where prompts are processed | Cloud servers | On your Mac |
| Third-party disclosure | Yes | No |
| Data retention you control | Partial | Full |
| Zero data retention guarantee | Enterprise-only, limited | Inherent to local AI |
| Reasonable expectation of confidentiality | Weak (per Heppner) | Strong |
| Chat logs discoverable | Yes | Not externally stored |

A private AI solves the exact weaknesses the Heppner court identified. The privacy comes from the architecture itself, not from a vendor promise you would have to audit and enforce.

Written by

Selvam Sivakumar

Founder, Elephas.app

Selvam Sivakumar is the founder of Elephas and an expert in AI, Mac apps, and productivity tools. He writes about practical ways professionals can use AI to work smarter while keeping their data private.

Close the confidentiality gap in AI-assisted legal work

Elephas is a privacy-friendly AI knowledge assistant with built-in local LLM models and Smart Redaction. Your prompts, your client data, and the outputs stay on your Mac, which removes the third-party disclosure that defeats privilege. Pair it with your attorney's workflow.

Try Elephas

Built-in local LLM models. No credit card required.