OpenAI just dropped a policy manifesto that reads like a Silicon Valley fever dream crossed with 1930s New Deal rhetoric. The company released a series of recommendations Monday outlining how governments should handle the economic chaos it believes AI will inevitably create. And honestly, it’s hard to know whether to take it seriously or laugh at the audacity.
The core argument is straightforward enough: AI is advancing so rapidly that we’re heading toward “superintelligence,” and nobody really knows what happens next. OpenAI frames this as a call for democratic governance of AI’s future. But the actual proposals? They’re ambitious to the point of seeming almost unimaginable in today’s political landscape.
The Wealth Fund Gambit
The headline-grabbing idea is a public wealth fund. Picture this: governments and AI companies team up to invest in long-term assets tied to the AI boom, then distribute the returns directly to citizens. It’s essentially a sovereign wealth fund model, the kind Norway uses with oil revenues, except applied to the technology sector.
It’s not a bad idea in theory. You’re essentially saying, “Hey, this technology is generating massive value. Let’s make sure regular people get a piece of it.” The mechanics are straightforward. The politics are another story entirely.
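To make “straightforward mechanics” concrete, here’s a back-of-the-envelope sketch of the Norway-style model. Every number below is invented for illustration; none of it comes from OpenAI’s document.

```python
# Toy sketch of the wealth-fund math, with made-up numbers.
# The mechanic: a fund invests in AI-linked assets, and some share
# of each year's returns is paid out per capita, Norway-style.

def annual_dividend(fund_value: float, return_rate: float,
                    payout_share: float, population: int) -> float:
    """Per-citizen payout from one year of fund returns.

    fund_value:   total assets under management (USD)
    return_rate:  annual return on those assets (e.g. 0.05 = 5%)
    payout_share: fraction of returns distributed rather than reinvested
    population:   number of eligible citizens
    """
    returns = fund_value * return_rate
    return (returns * payout_share) / population

# Hypothetical: a $2 trillion fund, 5% annual returns, 80% paid out,
# split across 260 million eligible adults.
payout = annual_dividend(2e12, 0.05, 0.80, 260_000_000)
print(f"${payout:,.2f} per person per year")  # → $307.69 per person per year
```

The arithmetic really is the easy part, and it also shows why the politics are hard: even trillion-dollar funds produce modest per-person dividends until they’ve compounded for years.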
Taxes, Workweeks, and the Automation Question
OpenAI isn’t stopping at wealth redistribution. The policy document calls for a modernized tax system that shifts away from labor income and payroll taxes toward corporate income and capital gains. There’s also a proposal for taxes specifically on automated labor, which is where things get interesting (and potentially complicated).
The logic is clear: if AI replaces human workers, the tax base traditionally built on wages collapses. Taxing automation itself is one way to maintain government revenue. But defining what counts as “automated labor” in practice? That’s a legislative nightmare waiting to happen.
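The collapsing-tax-base logic can be shown with a toy calculation. The rates and figures below are invented for illustration (the 15.3% is roughly the combined US payroll tax rate); the `automation_tax` levy is a hypothetical construct, not a rate from OpenAI’s proposal.

```python
# Toy illustration of the tax-base problem an automation tax targets.

PAYROLL_TAX = 0.153  # roughly the combined US payroll tax rate

def revenue(wage_bill: float, automated_output_value: float,
            automation_tax: float) -> float:
    """Government take from payroll tax on wages plus a hypothetical
    levy on the value of work performed by automated systems."""
    return wage_bill * PAYROLL_TAX + automated_output_value * automation_tax

# Before automation: a $10M wage bill, all human labor.
before = revenue(10_000_000, 0, 0.0)

# After: half the wage bill is replaced by AI systems producing the
# same value. With no automation tax, payroll revenue falls by half.
after_untaxed = revenue(5_000_000, 5_000_000, 0.0)

# A matching 15.3% levy on the automated share restores the original
# take -- which is exactly why the definition of "automated labor"
# becomes the fight: every dollar classified as automated gets taxed.
after_taxed = revenue(5_000_000, 5_000_000, 0.153)
```

The model is crude on purpose: in reality, firms would restructure to keep output out of whatever the statute defines as “automated labor,” which is the legislative nightmare in question.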
Then there’s the four-day workweek proposal. OpenAI suggests governments encourage and incentivize employers to experiment with shorter weeks while maintaining full pay. They’d even offer “benefits bonuses” tied to productivity gains from AI tools. It’s a creative attempt at wealth-sharing without layoffs, but it assumes employers would voluntarily shrink work hours rather than just cut jobs outright.
The Energy Problem Nobody Wants to Talk About
Here’s something that gets less attention: OpenAI is calling for an accelerated expansion of the US electricity grid. AI data centers are already straining power infrastructure, and demand will only spike as models get larger and more powerful. This is the unsexy part of the AI boom nobody’s prepared for, but it’s just as critical as the policy proposals.
Why This Matters Right Now
OpenAI isn’t operating in a vacuum. Fears about AI-driven job losses are already shaping markets and corporate decisions. In February, a report sketching a hypothetical AI-triggered economic disruption sparked a genuine stock market selloff. Major software companies have watched their valuations tank in what’s been dubbed the “SaaSpocalypse,” as enterprises consider replacing expensive software subscriptions with AI tools. Block and Atlassian have already cited AI as part of their recent layoff calculations.
OpenAI isn’t alone in sounding the alarm, either. Anthropic CEO Dario Amodei wrote last year that under superintelligence, the current organization of the global economy would “no longer make sense.” OpenAI CEO Sam Altman has long championed Universal Basic Income, and more recently floated “Universal Basic Compute” as an alternative, where people receive AI computing power rather than cash.
These aren’t fringe voices. They’re the people building the technology.
The Credibility Paradox
Here’s the uncomfortable tension: OpenAI is essentially arguing that the technology it’s racing to develop will be so disruptive that society needs massive reforms to handle it. That’s either admirable transparency or convenient cover-your-ass. Maybe both.
The company frames these as “initial ideas” to address disruption risks, emphasizing the need for democratic input and for people to have “real power to shape the AI future they want.” But there’s an odd asymmetry here. OpenAI doesn’t need anyone’s permission to build increasingly powerful AI systems. The policy proposals feel a bit like offering nutritional advice while selling junk food.
Will Anything Actually Happen?
That’s the real question. These proposals require political will, legislative action, and coordination between governments and private companies. Getting the US Congress to agree on a unified approach to AI taxation and workforce policy? The odds aren’t great. Add in the international dimension (AI doesn’t respect borders), and the complexity multiplies.
Yet ignoring the proposals entirely seems reckless. Whether or not OpenAI’s specific ideas become policy, the underlying challenge is real: rapid technological change does create economic disruption, and waiting until millions are unemployed to figure out a response is a losing strategy.
The tech industry has a habit of moving fast and breaking things. Sometimes “things” means consumer privacy. Sometimes it means entire job categories. If OpenAI’s policy recommendations do nothing else, they’re forcing a conversation about what happens when you break the economy.