AI Policy · 12 min read

OpenAI Just Published Its Biggest Policy Document Ever. Here's What It Says and What It Doesn't.

Most AI policy documents read like legal disclaimers. This one reads like a pitch to rebuild America from the ground up. OpenAI quietly dropped a 13-page paper in April 2026 that proposes everything from a national wealth fund to containment playbooks for AI systems that can copy themselves.

It invokes the New Deal. It talks about 4-day workweeks. It names itself as part of the problem, once, on page five, and then spends the other twelve pages positioning itself as the solution.

What makes this document worth reading is not just what OpenAI said. It is what they very carefully chose not to say. The gaps tell a different story than the proposals, and that story is the one you should pay attention to.

At a glance: 13 pages in the policy document · 20 specific proposals · $6.6B recent funding round · 8 topics conspicuously absent.

Executive Summary

  • OpenAI published a 13-page policy document titled “Industrial Policy for the Intelligence Age,” containing 20 specific proposals for restructuring America's economy and safety infrastructure around AI.
  • The document proposes a Public Wealth Fund, 32-hour/4-day workweek pilots, treating AI access as a public utility, and a complete overhaul of the tax system.
  • On the safety side, OpenAI calls for containment playbooks for AI systems that can “autonomously replicate themselves,” an aviation-style incident reporting system, and a global network of AI safety institutes modeled on the IAEA.
  • The document never mentions open source, copyright, artist compensation, algorithmic bias, privacy (as a standalone topic), consent, or gig workers. Every omission protects an OpenAI business interest.
  • OpenAI published this during its for-profit conversion, after a $6.6 billion funding round, while opening a permanent policy office in Washington, DC.

The Big Bet: Superintelligence Is Not a Maybe, It's a When

The entire document rests on a single claim: we are moving toward superintelligence. Not as a distant theoretical possibility, but as something already underway.

OpenAI defines superintelligence as AI that outperforms the smartest humans even when those humans are assisted by AI. That's a high bar. And they frame the trajectory in concrete terms: AI has gone from helping with tasks that take minutes to tasks that take hours. Next, they expect AI to handle projects that currently take people months.

The promise, if this plays out, is significant. Lower costs for families. Medical and scientific breakthroughs. New kinds of work and creativity. The document compares this moment to the arrival of electricity, the combustion engine, and mass production.

But OpenAI doesn't pretend the path is smooth. They list five risks explicitly:

  1. Mass job and industry disruption at a speed unlike any previous shift
  2. Bad actors misusing AI for cyberattacks and biological weapons
  3. AI systems going beyond human control (alignment failure)
  4. Governments and institutions using AI against democratic values
  5. Wealth and power concentrating in a few companies

On that last point, they name themselves. “There is also a risk that the economic gains concentrate within a small number of firms like OpenAI.” It's the only time in 13 pages they acknowledge being part of the problem.

Their answer to all five risks is a new industrial policy built on three principles: share prosperity broadly, mitigate risks, and democratize access to AI. They invoke the Progressive Era and the New Deal as proof that democratic societies have handled transformations this large before.

Takeaway: OpenAI thinks superintelligence is inevitable and arriving fast. It could be extraordinary or devastating, depending on what governments do now.

11 Ideas to Stop AI From Making Only the Rich Richer

The first half of the document proposes eleven ideas for keeping the AI economy open and broadly beneficial. They're best understood in four groups.

Your Job

Worker Perspectives. When a company deploys AI, workers should have a formal seat at the table. They're the ones who know how work actually gets done, where AI helps, and where it causes harm. The proposal includes clear limits on AI that intensifies workloads, narrows autonomy, or undermines fair scheduling and pay.

Efficiency Dividends. This is the 4-day workweek proposal. If AI makes a company more productive, that gain should go back to workers, not just shareholders. The proposal calls for 32-hour, four-day workweek pilots with no loss in pay. If output and service levels hold steady, convert the reclaimed hours into a permanent shorter week, bankable paid time off, or both. On top of that: bigger retirement contributions, a larger employer share of healthcare costs, and subsidized childcare and eldercare.

Portable Benefits. In an AI economy, people will change jobs and roles more frequently. Your healthcare, retirement savings, and skills training shouldn't vanish every time you switch employers. The proposal calls for portable accounts attached to the individual, pooling contributions from multiple sources, following you across jobs, industries, and even into entrepreneurship.

Your Money

Modernize the Tax Base. This is arguably the most important proposal in the entire document. As AI shifts profits from human labor to capital and machines, payroll and income taxes shrink. That's the money funding Social Security, Medicaid, SNAP, and housing assistance. If those revenue sources dry up, core programs collapse.
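The document offers no numbers, but the mechanism is easy to make concrete. Here's a back-of-the-envelope sketch in Python; the 15.3% rate is the real combined US FICA payroll tax, while the firm size and labor shares are entirely hypothetical:

```python
# Back-of-the-envelope illustration, not from the OpenAI document.
# 15.3% = combined US FICA payroll tax (12.4% Social Security + 2.9% Medicare).
# The firm size and labor shares below are hypothetical.
PAYROLL_TAX_RATE = 0.153

def payroll_revenue(value_added: float, labor_share: float) -> float:
    """Payroll tax collected when `labor_share` of value added is paid as wages."""
    return value_added * labor_share * PAYROLL_TAX_RATE

value_added = 10_000_000  # a firm producing $10M of value per year

before = payroll_revenue(value_added, labor_share=0.60)  # wages dominate
after = payroll_revenue(value_added, labor_share=0.30)   # AI shifts value to capital

print(f"Before: ${before:,.0f} in payroll tax per year")  # $918,000
print(f"After:  ${after:,.0f} in payroll tax per year")   # $459,000
```

Halve the labor share and the payroll tax base halves with it, even though the firm produces exactly as much value as before.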

The solution: rebalance toward capital-based revenues. Higher taxes on capital gains at the top, corporate income taxes, targeted measures on sustained AI-driven returns, and possibly new taxes on automated labor itself. Pair those with wage-linked incentives so companies are rewarded for retaining and retraining workers instead of just replacing them.

Public Wealth Fund. Every citizen gets a stake in AI-driven economic growth. This is not Universal Basic Income. It's an investment fund, seeded by AI companies and the broader set of firms adopting AI. The fund invests in diversified, long-term assets. Returns could be distributed directly to citizens, regardless of whether they own stocks or have access to financial markets. Think of Norway's sovereign wealth fund, but for America, funded by the AI boom.

Your Access

Right to AI. This is the most radical proposal in the document. OpenAI argues that AI should be treated like electricity or the internet: foundational infrastructure for participating in the modern economy. Free or low-cost access points in libraries, schools, and underserved communities. OpenAI even acknowledges that the internet was never fairly distributed and says we should learn from that failure rather than repeat it.

AI-First Entrepreneurs. Use AI to handle the overhead that blocks regular people from starting businesses: accounting, marketing, procurement. Pair that with microgrants, revenue-based financing, model contracts, and shared back-office infrastructure. The vision: a nurse or a mechanic with deep domain expertise should be able to launch a business without an MBA.

The Infrastructure

Accelerate Grid Expansion. AI data centers consume enormous amounts of energy, and the grid isn't ready. The proposal: new public-private partnerships to build energy infrastructure. OpenAI says data centers should “pay their own way” on energy so households aren't subsidizing them.

Adaptive Safety Nets. If AI disruption hits a region or industry hard, expanded benefits should activate automatically. Expanded unemployment, fast cash assistance, wage insurance, and training vouchers, all triggered by real-time metrics on job loss and sectoral disruption. When the crisis passes, support scales back down. No waiting for Congress to debate emergency packages.
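The document names no thresholds, but the trigger logic it describes is straightforward to sketch. Everything below, from the metric to the tiers, is invented for illustration:

```python
# Hypothetical trigger for adaptive safety nets; the document specifies
# no metrics or thresholds, so every number here is invented.
def support_tier(monthly_job_loss_rate: float) -> str:
    """Map a real-time regional job-loss metric to a benefits tier."""
    if monthly_job_loss_rate >= 0.05:   # 5%+ of regional jobs lost in a month
        return "crisis: expanded unemployment, wage insurance, fast cash assistance"
    if monthly_job_loss_rate >= 0.02:   # elevated disruption
        return "elevated: expanded unemployment, training vouchers"
    return "baseline: standard programs"

print(support_tier(0.06))  # crisis support activates automatically
print(support_tier(0.01))  # and scales back down, no act of Congress required
```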

Pathways into Human-Centered Work. AI automates information work. Humans should move into the work that requires human connection: childcare, eldercare, education, healthcare, community services. Government builds training pipelines, supports transitions into care roles, and incentivizes employers to raise pay in sectors facing chronic shortages.

Accelerate Scientific Discovery. Build a distributed network of AI-powered laboratories that can test and validate hypotheses at scale. The key detail: deploy them broadly, across universities, community colleges, hospitals, and regional research hubs. Not just MIT and Stanford. Democratize who gets to do science and where breakthroughs happen.

The 4-day workweek and Public Wealth Fund will generate the most headlines. But the tax modernization proposal is the one that actually matters most. Without it, the social programs holding the economy together will slowly starve as AI shifts value from labor to capital.

Takeaway: OpenAI proposes that AI's productivity gains should flow to workers in time and money, that everyone should have baseline access to AI, and that the tax system needs a fundamental redesign before it quietly collapses.

9 Ideas to Stop AI Before It Stops Listening to Us

The second half addresses safety, governance, and resilience. Nine proposals, three groups.

Catching Problems

Safety Systems for Emerging Risks. Tools to detect and prevent AI misuse in cyberattacks, bioweapons, and other high-consequence domains. The novel mechanism here: advance-purchase commitments, where the government pre-orders safety solutions to create a market incentive for companies to build them. Safety becomes a product category, not just a cost.

AI Trust Stack. How do you know if something was made by AI? The proposal calls for cryptographic signatures for AI-generated content and AI-issued instructions. Privacy-preserving audit logs that support investigation without creating mass surveillance. And governance frameworks that clarify who is accountable when AI systems cause harm.
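The document doesn't specify a signing scheme, but the core idea of cryptographically signed content is decades old. A minimal sketch using Ed25519 signatures via Python's `cryptography` library, with a hypothetical provider key standing in for whatever real deployments would use:

```python
# Minimal content-signing sketch, assuming an Ed25519 keypair held by the
# AI provider; this illustrates the concept, not OpenAI's actual proposal.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

provider_key = Ed25519PrivateKey.generate()  # kept secret by the provider
public_key = provider_key.public_key()       # published so anyone can verify

content = b"This paragraph was generated by model X."
signature = provider_key.sign(content)       # shipped alongside the content

try:
    public_key.verify(signature, content)    # raises if content was altered
    print("Valid: content is unchanged since the provider signed it.")
except InvalidSignature:
    print("Invalid: content was altered or never came from this provider.")
```

Real provenance standards such as C2PA bind signatures to richer metadata and chains of custody, but the verification step at the bottom of the stack looks like this.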

Auditing Regimes. Independent auditors evaluating AI systems for safety risks. Tiered regulation, meaning strict oversight for frontier models like GPT-class systems, lighter rules for smaller systems and the startups building on them. Standards designed for international adoption to reduce fragmentation. Insurance frameworks to create market pressure for safety.

Worst-Case Scenarios

Model-Containment Playbooks. This is the most alarming section in the document. OpenAI describes three scenarios that require coordinated containment: model weights that have been released into the world, developers unwilling to restrict access to dangerous capabilities, and, most strikingly, “systems that are autonomous and capable of replicating themselves.” They propose containment playbooks modeled on public health crisis response. Even when full containment isn't possible, coordinated action can reduce impact.

Incident Reporting. An aviation-style system for AI. Companies share incidents, misuse, and near-misses with a designated public authority. The critical detail: this includes “cases where models exhibited concerning internal reasoning, unexpected capabilities, or other warning signals,” even if safeguards prevented harm. Learning from close calls before they become real disasters. Prevention over punishment.
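An aviation-style system implies a structured record that near-misses can flow into. The document defines no format, so the record below is a purely hypothetical sketch, every field invented:

```python
# Hypothetical incident record for an aviation-style AI reporting system.
# The document defines no schema; all field names here are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIIncidentReport:
    model_id: str           # which system was involved
    incident_type: str      # e.g. "misuse", "near-miss", "capability-surprise"
    description: str        # what happened, in plain language
    harm_occurred: bool     # False for near-misses caught by safeguards
    warning_signals: list[str] = field(default_factory=list)
    reported_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# The kind of report the proposal emphasizes: no harm, but a warning signal.
near_miss = AIIncidentReport(
    model_id="frontier-model-v4",
    incident_type="near-miss",
    description="Concerning internal reasoning detected; output filter blocked it.",
    harm_occurred=False,
    warning_signals=["concerning internal reasoning", "unexpected capability"],
)
```

The aviation parallel lives in the harm_occurred=False case: reports where the safeguards held are exactly what the system exists to collect.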

Who Controls AI?

Mission-First Corporate Governance. AI companies should adopt governance structures like Public Benefit Corporations that embed public-interest accountability, a point that gains urgency as safety leaders resign from major AI labs. Audit models for “manipulative behaviors or hidden loyalties.” Prevent any individual or internal faction from quietly using AI systems to concentrate power. Share benefits broadly through philanthropy and charitable commitments.

Guardrails for Government Use. Clear rules, codified in law, for how governments can and cannot use AI. Every AI-assisted government decision should leave an auditable digital trail. Modernize FOIA so citizens and watchdog organizations can investigate how AI influenced government actions. AI-interaction logs and agentic action logs classified as federal records.

Mechanisms for Public Input. AI alignment shouldn't be decided only by engineers and executives behind closed doors. Developers should publish model specifications describing what their systems are supposed to do and how they're evaluated. Governments anchor standards in democratic laws. The public gets structured input through representative processes.

International Information-Sharing. A global network of AI safety institutes sharing evaluation results, alignment findings, and emerging risks. Shared protocols for joint evaluations and coordinated mitigation. Antitrust safe harbors so companies can share safety-critical information without legal risk. Think of it as an IAEA for artificial intelligence.

The model-containment section deserves a second read. OpenAI is writing playbooks for scenarios where AI systems replicate themselves and cannot be recalled. If that scenario is plausible enough to plan for, there's a question that precedes every policy proposal: why are we building it in the first place?

Takeaway: OpenAI has a plan for safety, from auditing to international coordination. But the fact that they need containment playbooks for self-replicating AI should make everyone pause.

The Silence Is Louder Than the Policy

What a company chooses not to say in a 13-page manifesto is as revealing as what it does say. Here are the topics that don't appear:

Open source. The company called “Open” AI published 13 pages about AI access and democratization without once mentioning open-source models. In a document about making AI broadly accessible, this is a glaring omission.

Copyright and creators. Artists, writers, musicians, and photographers whose work trained these models are absent. No discussion of compensation, consent, or data rights. Not a single sentence.

Bias, discrimination, and fairness. A 13-page AI policy document that never uses the words “bias,” “discrimination,” or “fairness.” Zero proposals addressing algorithmic harm to specific communities.

Privacy. The word appears twice, both times in passing. No standalone privacy proposal in a document about embedding AI into every institution, government workflow, and public service.

China and geopolitics. A US-focused AI policy paper in 2026 that completely ignores the global AI race. No mention of competition, export controls, or geopolitical strategy.

Consent. Should people consent to AI being trained on their data? To AI making decisions about their loans, medical diagnoses, or job applications? Not discussed.

Gig workers. The proposals address traditional employment. The growing freelance and gig workforce, arguably the most exposed to AI displacement, is invisible.

The access hierarchy. OpenAI proposes broad access to “foundational” AI models, the baseline. But “frontier” models, the most powerful systems, stay behind strict controls and enterprise pricing. Everyone gets economy class. First class stays locked. This protects OpenAI's competitive position while sounding egalitarian.

The absences aren't random. They cluster around topics where OpenAI's business interests conflict with the public good. Open source reduces their moat. Copyright creates legal liability. Bias requires internal auditing they'd rather not publicize. Privacy restricts data collection. Each silence protects something specific.

Takeaway: This document has blind spots, and they're all in places where OpenAI's profits might be at risk.

Is This a Public Good or a Power Play?

The timing of this document is worth noting. OpenAI published it during its conversion from a capped-profit to a for-profit corporation. After raising $6.6 billion. Alongside the Stargate data center infrastructure project. While opening a permanent policy office in Washington, DC.

The document proposes governance frameworks that closely mirror OpenAI's own corporate structure. It suggests tiered regulation that would burden smaller competitors with compliance costs while entrenching incumbents. It positions OpenAI as the responsible adult in the room, the company that took the time to think about all of this.

OpenAI names itself as part of the problem once, on page 5. Every other mention positions it as the solution.

The honest assessment: it's both a public good and a power play. Many of these proposals are genuinely ambitious and needed to be said by someone with enough influence to be heard. A Public Wealth Fund. A 4-day workweek. AI treated as a public utility. Automatic safety nets. These are ideas that can improve lives regardless of who proposed them.

But the document also serves OpenAI's interests. It defines the rules of the game in terms favorable to their position. It creates regulatory moats. It wraps corporate strategy in the language of public service.

Regardless of motive, these 20 proposals are now in the public conversation. The question isn't whether OpenAI is sincere. The question is whether these ideas are good enough to survive contact with democratic debate. Judge the ideas on their merits. But don't forget who wrote them.

Takeaway: Judge the ideas, not the messenger. But don't forget who the messenger is.

Written by

Selvam Sivakumar

Founder, Elephas.app

Selvam Sivakumar is the founder of Elephas and an expert in AI, Mac apps, and productivity tools. He writes about practical ways professionals can use AI to work smarter while keeping their data private.
