AI Privacy Incident - March 31, 2026

OpenAI macOS signing pipeline compromised via Axios supply chain attack

Vendor: OpenAI
Product: ChatGPT Desktop, Codex, Atlas (macOS)
Severity: high
Status: confirmed-resolved
Users affected: all macOS users of ChatGPT Desktop, Codex, and Atlas; older builds stopped functioning after 2026-05-08

Summary

On March 31, 2026, OpenAI's GitHub Actions workflow for notarizing macOS applications executed a malicious version of the Axios JavaScript library during a supply chain campaign that Socket attributed to North Korean actors. The compromised pipeline held code-signing certificates for ChatGPT Desktop, Codex, and Atlas. OpenAI disclosed the incident on April 11, 2026, revoked the affected certificates, rebuilt the macOS applications, and coordinated with Apple to block notarization attempts using the previous certificate. The company stated it found no evidence that user data or production software were compromised.

What happened

  • A malicious version of Axios (1.14.1) executed inside OpenAI's GitHub Actions workflow on March 31, 2026, during a supply chain campaign tracked by Socket.
  • The workflow was the one OpenAI used to notarize macOS applications and held code-signing certificates for ChatGPT Desktop, Codex, and Atlas.
  • OpenAI rotated the affected certificates, rebuilt the macOS applications with new credentials, and worked with Apple to block notarization using the previous certificate.
  • Users were required to update; older builds signed with the revoked certificate stopped functioning after May 8, 2026.
  • Socket published its research writeup on April 11, 2026. Developer reaction to the disclosure's timing and scope was mixed.

Timeline

  • 2026-03-31 - Malicious Axios package executes in OpenAI's macOS signing workflow.
  • 2026-04-11 - OpenAI publishes disclosure; Socket publishes research writeup.
  • 2026-05-08 - Older builds signed with the revoked certificate stop functioning.

What the vendor has confirmed

OpenAI described the root cause as "a misconfiguration in the GitHub Actions workflow" that referenced Axios by a floating tag rather than pinning it to a specific commit hash, and that did not enforce version-age validation. The company said the signing certificates for the three macOS applications were treated as potentially compromised and were rotated, and that Apple assisted in blocking notarization attempts using the previous certificate. OpenAI said it found "no evidence that user data, internal systems, or production software were compromised."
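OpenAI has not published its remediation in code form, but the control it said was missing, version-age validation, can be sketched as a pre-install gate. The following is a minimal illustration in TypeScript, assuming a Node 18+ runtime with global fetch; the seven-day threshold and the `assertVersionAge` helper are illustrative assumptions, not OpenAI's stated policy. The npm registry's package metadata does expose per-version publish timestamps under a `time` field, which is what such a gate keys on.

```typescript
// Illustrative version-age gate: refuse to proceed when a dependency
// version was published more recently than MIN_AGE_DAYS ago.

const MIN_AGE_DAYS = 7; // illustrative threshold, not OpenAI's stated policy

async function assertVersionAge(pkg: string, version: string): Promise<void> {
  // The npm registry packument maps each version to its publish timestamp
  // under the `time` field.
  const res = await fetch(`https://registry.npmjs.org/${pkg}`);
  if (!res.ok) throw new Error(`registry lookup failed for ${pkg}: ${res.status}`);

  const meta = (await res.json()) as { time?: Record<string, string> };
  const published = meta.time?.[version];
  if (!published) throw new Error(`${pkg}@${version} has no publish timestamp`);

  const ageDays = (Date.now() - Date.parse(published)) / 86_400_000;
  if (ageDays < MIN_AGE_DAYS) {
    throw new Error(
      `${pkg}@${version} was published ${ageDays.toFixed(1)} days ago; ` +
        `blocking the build until it is at least ${MIN_AGE_DAYS} days old`,
    );
  }
}

// On 2026-03-31, a gate like this would have rejected a same-day release.
assertVersionAge("axios", "1.14.1").catch((err) => {
  console.error(String(err));
  process.exitCode = 1;
});
```

A check like this trades freshness for exposure time: a just-hijacked release sits in quarantine long enough for the ecosystem to flag it before a privileged CI run ever installs it.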

Broader context

A signed desktop AI client inherits the security posture of every dependency its vendor's build pipeline pulls in. The failure mode at play here - a floating package reference resolving to a newly published malicious version during a CI run with privileged credentials - is not specific to AI products, but it carries higher stakes when the compromised output is the signing material for software that handles personal or business-sensitive files on an end user's machine.
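To make that failure mode concrete: OpenAI's wording ("a floating tag rather than a specific commit hash") does not say whether the reference was a git tag or a semver range, so the sketch below uses a semver range as a stand-in to show the same resolution behavior. It relies on the real `semver` npm package; the range and the pre-attack version list are assumptions for illustration, while 1.14.1 is the malicious release named in the incident.

```typescript
import semver from "semver"; // the `semver` package npm itself uses

// Registry state before and after the attacker publishes.
const before = ["1.13.0", "1.14.0"];
const after = [...before, "1.14.1"]; // malicious release lands

// A floating caret range accepts any compatible future release...
const range = "^1.14.0"; // assumed reference; OpenAI's exact spec is unpublished

console.log(semver.maxSatisfying(before, range)); // -> "1.14.0"
console.log(semver.maxSatisfying(after, range));  // -> "1.14.1", adopted automatically

// ...whereas an exact pin keeps resolving to the audited version.
console.log(semver.maxSatisfying(after, "1.14.0")); // -> "1.14.0"
```

The pinned reference keeps resolving to the audited release; the floating one adopts whatever the attacker publishes next, which is why pinning to an immutable commit hash, or an exact lockfile-verified version, is the standard mitigation.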


Written by

Selvam Sivakumar

Founder, Elephas.app

Selvam Sivakumar is the founder of Elephas and an expert in AI, Mac apps, and productivity tools. He writes about practical ways professionals can use AI to work smarter while keeping their data private.
