French prosecutors investigate X over Grok-generated child sexual abuse material
Summary
On April 20, 2026, Elon Musk and X chief executive Linda Yaccarino were summoned for voluntary questioning by the Paris prosecutor's office over the use of Grok, X's AI image tool, to generate sexualized images of non-consenting people, including minors. Musk did not appear. Prosecutors said the absence would not halt the case. The underlying criminal investigation, opened after a February raid on X's Paris offices by gendarmes and Europol, remains active.
What happened
- French authorities allege that X's Grok image model was used to produce child sexual abuse material in response to user prompts, and that the platform continued to distribute the resulting imagery.
- In February 2026, French gendarmes and Europol raided X's Paris offices as part of the investigation.
- On April 20, 2026, Musk and Yaccarino were summoned as part of voluntary interviews; other X employees were called as witnesses.
- Musk did not appear for his scheduled interview. Prosecutors confirmed the case would continue.
- The Paris prosecutor's office said evidentiary materials were being shared with the U.S. Department of Justice and with state prosecutors in California and New York.
Timeline
- 2026-02 - French gendarmes and Europol raid X's Paris offices.
- 2026-04-20 - Musk and Yaccarino summoned; Musk does not appear.
- 2026-04-20 - Paris prosecutor's office confirms cross-border coordination with U.S. authorities.
What remains unclear
- X has not publicly responded to the summons or the underlying allegations.
- The volume of Grok-generated imagery at issue has not been disclosed.
- The technical mechanism by which the prompts bypassed X's stated safety controls has not been described in the public record.
- Whether any xAI engineering or safety staff have been called as witnesses is not public.
Broader context
The Paris investigation is one of the first criminal cases in a major jurisdiction to treat a generative image service as an operator of the content its users produce. The legal question it raises - who is responsible when a model produces unlawful output from a user prompt - is distinct from the content-moderation debates that surrounded user-uploaded material. Prosecutors described the case as proceeding "in a constructive manner," language that signals a compliance-oriented outcome rather than a punitive one, but the cross-border evidentiary sharing indicates the matter is not confined to France.
