AI Privacy Incident
April 20, 2026

French prosecutors investigate X over Grok-generated child sexual abuse material

Vendor: xAI / X Corp
Product: Grok (on X)
Severity: critical
Status: ongoing
Users affected: undisclosed

Summary

On April 20, 2026, Elon Musk and X chief executive Linda Yaccarino were summoned for voluntary questioning by the Paris prosecutor's office over the use of Grok, the xAI-built image tool integrated into X, to generate sexualized images of non-consenting people, including minors. Musk did not appear; prosecutors said his absence would not halt the case. The underlying criminal investigation, opened after a February 2026 raid on X's Paris offices by French gendarmes and Europol, remains active.

What happened

  • French authorities allege that X's Grok image model was used to produce child sexual abuse material in response to user prompts, and that the platform continued to distribute the resulting imagery.
  • In February 2026, French gendarmes and Europol raided X's Paris offices as part of the investigation.
  • On April 20, 2026, Musk and Yaccarino were summoned as part of voluntary interviews; other X employees were called as witnesses.
  • Musk did not appear for his scheduled interview. Prosecutors confirmed the case would continue.
  • The Paris prosecutor's office said evidentiary materials were being shared with the U.S. Department of Justice and with state prosecutors in California and New York.

Timeline

  • 2026-02 - French gendarmes and Europol raid X's Paris offices.
  • 2026-04-20 - Musk and Yaccarino summoned; Musk does not appear.
  • 2026-04-20 - Paris prosecutor's office confirms cross-border coordination with U.S. authorities.

What remains unclear

  • X has not publicly responded to the summons or the underlying allegations.
  • The volume of Grok-generated imagery at issue has not been disclosed.
  • The technical mechanism by which the prompts bypassed X's stated safety controls has not been described in the public record.
  • Whether any xAI engineering or safety staff have been called as witnesses is not public.

Broader context

The Paris investigation is one of the first criminal cases in a major jurisdiction to treat the operator of a generative image service as responsible for the content its users prompt it to produce. The legal question it raises - who is liable when a model generates unlawful output from a user prompt - is distinct from the content-moderation debates that surrounded user-uploaded material. Prosecutors described the case as proceeding "in a constructive manner," language that suggests a compliance-oriented resolution may be in view rather than a purely punitive one, but the cross-border evidentiary sharing indicates the matter is not confined to France.

Written by

Selvam Sivakumar

Founder, Elephas.app

Selvam Sivakumar is the founder of Elephas and an expert in AI, Mac apps, and productivity tools. He writes about practical ways professionals can use AI to work smarter while keeping their data private.
