OpenAI macOS signing pipeline compromise via Axios supply chain
Summary
On March 31, 2026, OpenAI's GitHub Actions workflow for notarizing macOS applications executed a malicious version of the Axios JavaScript library during a supply chain campaign that Socket attributed to North Korean actors. The compromised pipeline held code-signing certificates for ChatGPT Desktop, Codex, and Atlas. OpenAI disclosed the incident on April 11, 2026, revoked the affected certificates, rebuilt the macOS applications, and coordinated with Apple to block notarization attempts using the previous certificate. The company stated it found no evidence that user data, internal systems, or production software were compromised.
What happened
- A malicious version of Axios (1.14.1) executed inside OpenAI's GitHub Actions workflow on March 31, 2026, during a supply chain campaign tracked by Socket.
- The workflow was the one OpenAI used to notarize macOS applications and held code-signing certificates for ChatGPT Desktop, Codex, and Atlas.
- OpenAI rotated the affected certificates, rebuilt the macOS applications with new credentials, and worked with Apple to block notarization using the previous certificate.
- Users were required to update: older builds signed with the now-revoked certificate stopped functioning after May 8, 2026.
- Socket published its research writeup on April 11, 2026, the same day as OpenAI's disclosure. Developer reaction was mixed, with criticism focused on the disclosure timing and the stated scope of impact.
Timeline
- 2026-03-31 - Malicious Axios package executes in OpenAI's macOS signing workflow.
- 2026-04-11 - OpenAI publishes disclosure; Socket publishes research writeup.
- 2026-05-08 - Older builds signed with the revoked certificate stop functioning.
What the vendor has confirmed
OpenAI described the root cause as "a misconfiguration in the GitHub Actions workflow": the workflow referenced Axios via a floating tag rather than pinning it to a specific commit hash, and it did not enforce version-age validation. The company said the signing certificates for the three macOS applications were treated as potentially compromised and were rotated, and that Apple assisted in blocking notarization attempts that used the previous certificate. OpenAI said it found "no evidence that user data, internal systems, or production software were compromised."
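OpenAI has not published the workflow, so the exact control that was missing is unknown. But the second control it names, version-age validation, can be sketched: refuse to install a dependency version published too recently to have been vetted. The function, threshold, and timestamps below are all illustrative assumptions, not OpenAI's implementation; the `publishTimes` shape mirrors the `time` field of an npm registry metadata document.

```javascript
// Hypothetical version-age gate for a CI run. Illustrative only;
// not OpenAI's actual workflow code.

const MIN_AGE_DAYS = 7; // assumed vetting window, not from the incident report

// `publishTimes` maps version -> ISO publish date, as in the `time`
// field returned by https://registry.npmjs.org/<package>.
function isVersionOldEnough(publishTimes, version, now = new Date()) {
  const published = publishTimes[version];
  if (!published) {
    throw new Error(`unknown version: ${version}`);
  }
  const ageMs = now - new Date(published);
  return ageMs >= MIN_AGE_DAYS * 24 * 60 * 60 * 1000;
}

// Made-up timestamps: a version published the day before the CI run
// fails the gate, while an older release passes.
const times = {
  "1.14.0": "2026-01-10T00:00:00Z",
  "1.14.1": "2026-03-30T18:00:00Z",
};
const ciRun = new Date("2026-03-31T12:00:00Z");
console.log(isVersionOldEnough(times, "1.14.0", ciRun)); // true
console.log(isVersionOldEnough(times, "1.14.1", ciRun)); // false
```

A gate like this would have failed the March 31 run for any package version published within the preceding week, regardless of whether the version itself had been flagged as malicious yet.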
Broader context
A signed desktop AI client inherits the security posture of every dependency its vendor's build pipeline pulls in. The failure mode at play here - a floating package reference resolving to a newly published malicious version during a CI run with privileged credentials - is not specific to AI products, but it carries higher stakes when the compromised output is the signing material for software that handles personal or business-sensitive files on an end user's machine.
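The standard mitigation for a floating reference resolving to a newly published version is to forbid floating specifiers entirely and install only from a lockfile. A minimal pre-install guard (hypothetical; the names and sample manifest are illustrative, not drawn from OpenAI's repository) could scan package.json and fail CI if any dependency uses a range or tag instead of an exact version:

```javascript
// Hypothetical pre-install guard: reject floating dependency
// specifiers (ranges like ^1.14.0 or tags like "latest") so the
// resolved version can never drift between CI runs. Illustrative only.

const EXACT = /^\d+\.\d+\.\d+(?:[-+].*)?$/; // exact semver, no range operators

function findFloatingSpecifiers(pkg) {
  const floating = [];
  for (const field of ["dependencies", "devDependencies"]) {
    for (const [name, spec] of Object.entries(pkg[field] ?? {})) {
      if (!EXACT.test(spec)) floating.push(`${name}@${spec}`);
    }
  }
  return floating;
}

// Sample manifest: one floating specifier, one exact pin.
const pkg = {
  dependencies: { axios: "^1.14.0" },       // floating: any 1.x >= 1.14.0
  devDependencies: { typescript: "5.4.5" }, // exact: pinned
};
console.log(findFloatingSpecifiers(pkg)); // [ 'axios@^1.14.0' ]
```

In practice this check complements, rather than replaces, lockfile-enforced installs (`npm ci`), which verify that what is installed matches the recorded versions and integrity hashes.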
Sources
- Socket.dev research writeup (research)
- OpenAI statement (quoted via Socket.dev) (vendor-disclosure)
