OpenAI Wrote the Rules. Now It Wants to Tax Them.
The trillion-dollar company that wants to tax itself
On April 6, 2026, OpenAI released a 13-page policy paper titled “Industrial Policy for the Intelligence Age,” and within hours the paper, Sam Altman’s blueprint for America’s AI future, had become the most contentious document in tech. The timing was not accidental. Three days earlier, OpenAI had closed a $122 billion funding round that valued the company at $852 billion, cementing its position as the most valuable artificial intelligence company on earth. The investors behind that historic round — Amazon at $50 billion, NVIDIA at $30 billion, SoftBank at $30 billion, plus $3 billion from retail investors — had just committed more capital to a single company than most nations spend on defense.
By any measure, OpenAI had won. The company is generating $2 billion in monthly revenue, pulling in $20 billion across all of 2025, and projecting $29.4 billion for 2026. It is targeting a Q4 2026 IPO at a valuation of approximately $1 trillion. And yet the paper argued something radical: that this wealth, and the far greater wealth to come from artificial general intelligence, should be distributed through mechanisms including public wealth funds, robot taxes, and a fundamental restructuring of how Americans work and pay taxes.
The question haunting every executive who read the paper was unavoidable: had OpenAI just written the most cynical regulatory capture document ever produced, or the most earnest plea for systemic change from the inside? And perhaps more pressingly — was there any meaningful difference between the two?
Consider the landscape in which the paper landed. Axios framed the proposals as a “superintelligence New Deal”, invoking the most ambitious federal policy framework in American history. The Roosevelt comparison was deliberate. OpenAI’s paper argued that superintelligence — AI that exceeds human capability across all cognitive domains — will reshape labor markets, wealth distribution, and political power more thoroughly than the Industrial Revolution reshaped agrarian economies. If that premise holds, the current tax code, social safety net, and employment law become artifacts of a vanishing world. The paper positioned itself as the map for what comes next.
The scale of the economic transformation OpenAI is forecasting deserves careful scrutiny. The company’s own revenue trajectory illustrates the acceleration: from $3.7 billion in 2024 to $20 billion in 2025 to a projected $29.4 billion in 2026. That near-eightfold increase in two years happened before most enterprises had deployed AI at scale. If the broader economy follows a similar adoption curve — and the Q1 2026 venture capital numbers suggest it will, with $300 billion flowing into AI-adjacent startups in a single quarter — the displacement effects the paper warns about could materialize faster than any policy framework can respond. The paper is, in effect, an admission by the industry’s leading company that the technology it is building will break the economy’s fundamental operating assumptions. Coming from any other source, this would be alarming. Coming from OpenAI, it is both alarming and self-serving.
But maps drawn by conquerors tend to favor the conqueror. OpenAI is not a neutral observer proposing technocratic solutions to a systemic problem. It is the company most likely to build superintelligence, most likely to profit from its deployment, and most likely to face regulatory constraints that could slow its development. Every line of the policy paper must be read through that lens. The proposals are not disinterested scholarship. They are strategy.
Robot taxes, wealth funds, and the 32-hour week
The specifics of OpenAI’s proposal rest on a deceptively simple premise: artificial intelligence will generate vast wealth, but that wealth will concentrate unless governments intentionally redistribute it. TechCrunch’s breakdown of the core mechanisms detailed a framework that would fundamentally alter the relationship between labor, capital, and the state.
Start with the robot tax. Rather than taxing labor — which the paper argues will become scarce as automation accelerates — governments should impose levies on AI and robotic systems that displace human workers. The concept mirrors Bill Gates’s 2017 proposal but scales it to an economy where Vinod Khosla, one of OpenAI’s earliest investors, predicts 80 percent of current jobs will be automatable by 2030. The revenue would capitalize a national public wealth fund — an American version of the model that already exists in Alaska, where the Permanent Fund paid residents $1,000 each in 2025 by investing oil royalties and distributing returns. OpenAI envisions every citizen holding an ownership stake in AI-driven economic growth, with dividends flowing directly to households.
The quantitative logic is compelling when you stitch together the available data. If AI-related productivity gains increase global GDP by even 15 percent over the next decade — a conservative estimate given current trajectories — that represents roughly $20 trillion in additional output. A 2 percent automation tax on that gain could generate $400 billion annually, enough to pay every American adult approximately $1,600 per year. Scale the productivity assumption to 25 percent, and the annual dividend approaches $2,700. Neither figure replaces a salary, but as a supplement atop existing safety nets, the numbers are not trivial. The Alaska model proves the mechanism works; the question is whether it can scale to a continental economy powered by intelligence rather than petroleum.
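The arithmetic behind those dividend figures can be checked in a few lines. The GDP baseline and adult-population count below are rough assumptions for the sake of the calculation, not numbers from the paper:

```python
# Back-of-the-envelope dividend math from the article's assumptions.
# GLOBAL_GDP and US_ADULTS are rough illustrative figures.

GLOBAL_GDP = 133e12   # assumed global GDP baseline, in dollars
US_ADULTS = 258e6     # approximate number of American adults
TAX_RATE = 0.02       # the hypothetical 2% automation tax

def annual_dividend(productivity_gain: float) -> float:
    """Per-adult annual dividend from taxing AI-driven GDP gains."""
    added_output = GLOBAL_GDP * productivity_gain  # new output from AI
    revenue = added_output * TAX_RATE              # automation-tax revenue
    return revenue / US_ADULTS                     # split across all adults

print(round(annual_dividend(0.15)))  # roughly $1,550, the article's "$1,600"
print(round(annual_dividend(0.25)))  # roughly $2,600, the article's "$2,700"
```

Under these assumptions the figures in the text hold up to within rounding; the sensitivity is almost entirely in the productivity-gain estimate, since the tax rate and population are comparatively stable inputs.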
Next comes the four-day workweek. OpenAI backs government-incentivized pilots of 32-hour schedules at full pay, arguing that as AI handles routine cognitive tasks, workers should retain purchasing power while surrendering hours rather than wages. The paper calls this an “efficiency dividend” — productivity gains translated into time rather than extracted as profit. Portable benefits decoupled from employment status (healthcare, retirement, training accounts) would travel with workers rather than anchoring them to employers.
Then the tax code inversion. Fortune documented Altman’s alignment with Khosla on eliminating federal income tax for anyone earning under $100,000 — effectively removing the entire working class from the income tax base. The revenue shortfall would be recaptured by taxing capital gains at the same rate as ordinary income. This represents a generational reversal of tax policy logic: instead of funding government through wages, you fund it through returns on capital. In a world where AI generates most economic value and labor’s share of GDP shrinks, the argument has structural coherence. But it also means the companies generating AI-derived capital gains — companies like OpenAI — would become the primary revenue sources for government. That’s a dependency relationship with profound political implications.
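The inversion can be made concrete with a toy model. The rates and household figures below are invented purely for illustration; the paper does not specify them:

```python
# Toy model of the proposed tax-base inversion: exempt wage income under
# $100k and recapture revenue by taxing capital gains at ordinary rates.
# Both rates are hypothetical placeholders.

ORDINARY_RATE = 0.22   # assumed flat ordinary-income rate
OLD_GAINS_RATE = 0.15  # assumed preferential capital-gains rate

def household_tax(wages: float, gains: float, inverted: bool) -> float:
    """Tax collected from one household under either regime."""
    if inverted:
        wage_tax = ORDINARY_RATE * max(0.0, wages - 100_000)  # sub-$100k exempt
        gains_tax = ORDINARY_RATE * gains  # gains taxed like ordinary income
    else:
        wage_tax = ORDINARY_RATE * wages
        gains_tax = OLD_GAINS_RATE * gains
    return wage_tax + gains_tax

# A $60k wage earner owes nothing under the inverted code...
print(household_tax(60_000, 0, inverted=True))  # 0.0
# ...while a household living on $1M in capital gains pays more than before.
print(household_tax(0, 1_000_000, inverted=False))
print(household_tax(0, 1_000_000, inverted=True))
```

Even this crude sketch shows the structural point: the burden migrates from wage earners to holders of capital, which is exactly why the proposal makes AI companies, the largest generators of capital gains, the state’s primary funders.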
Finally, the paper proposes automatic stabilizers. InvestmentNews reported that OpenAI envisions trigger-based safety nets that activate without congressional negotiation: when unemployment or AI-displacement metrics cross predetermined thresholds, income support, wage insurance, and direct cash payments ramp up automatically, then wind down as labor market indicators recover. No legislative gridlock. No debate over who deserves assistance. Data-driven redistribution replaces democratic deliberation on the speed and scope of economic intervention. The mechanism resembles how the Federal Reserve manages monetary policy through data-dependent rate adjustments — except applied to fiscal transfers affecting hundreds of millions of people.
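A minimal sketch of what such a trigger could look like, with entirely hypothetical thresholds and payment sizes (the paper names no specific numbers):

```python
# Trigger-based stabilizer sketch: support scales with how far
# unemployment runs above a threshold, with no legislative step in the
# loop. All constants are hypothetical.

UNEMPLOYMENT_TRIGGER = 0.05  # stabilizer activates above 5% unemployment
BASE_PAYMENT = 400.0         # assumed monthly payment at the trigger
STEP = 200.0                 # assumed extra payment per point above it

def monthly_support(unemployment_rate: float) -> float:
    """Cash support per eligible worker, ramping with the data."""
    if unemployment_rate <= UNEMPLOYMENT_TRIGGER:
        return 0.0  # labor market healthy: stabilizer stays dormant
    points_over = (unemployment_rate - UNEMPLOYMENT_TRIGGER) * 100
    return BASE_PAYMENT + STEP * points_over

print(monthly_support(0.04))         # 0.0 (below trigger, nothing paid)
print(round(monthly_support(0.07)))  # 800 (two points over the trigger)
```

The design question the paper leaves open is who sets the constants: a formula like this removes gridlock only by moving the political fight from each recession to the moment the thresholds are legislated.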
The paper also proposed treating AI access as a public entitlement. Certain AI capabilities would become public infrastructure — available to all citizens either free or at cost, with government subsidizing access for low-income households, community institutions, and hourly workers. This inverts the current model where frontier AI is a luxury good priced for enterprise budgets. Quartz noted the proposal explicitly argued that pricing AI tools out of reach for economically marginalized groups would deepen the very inequality the paper claimed to address. The framing was aspirational: AI as literacy, AI as electricity, AI as a right rather than a product.
When the fox writes the henhouse rulebook
The immediate criticism from policy scholars was withering. Anton Leicht of the Carnegie Endowment for International Peace told Fortune that OpenAI’s proposal was “comms work to provide cover for regulatory nihilism” — the strategic deployment of progressive-sounding policy language to neutralize actual regulation. The timing supported this reading. OpenAI had just completed its $852 billion valuation round. Two of its board observers represented venture capital firms with direct financial incentives to resist meaningful governance of AI development. How convenient, critics noted, that a company proposing to tax AI usage had positioned itself to write the rules under which it would be taxed.
Gizmodo’s analysis characterized the paper as simultaneously utopian and strategically vague — utopian about what AI could achieve, conveniently silent on what would constrain the companies building it. If AI firms operate with minimal safety requirements while developing superintelligence (the paper’s implicit argument), and if governments shift redistribution entirely onto AI taxation rather than corporate regulation, what prevents the consolidation of decision-making power in a handful of AI monopolies? The paper says nothing about computational power concentration, data monopolies, or the political leverage that accrues to firms controlling the infrastructure of superintelligence.
The conflict of interest is structural, not personal. OpenAI’s API enables thousands of businesses to replace human writers, analysts, customer service representatives, and junior developers. Its call for a “New Deal” for the AI age positioned the company as a stakeholder in solving a problem it is actively creating. The intellectual move is elegant: we are not responsible for displacement; we are offering solutions. But the solutions primarily benefit firms that have already won the AI race. Smaller competitors, open-source developers, and non-AI companies absorb the tax burden while OpenAI operates under a framework it helped design.
The paper conveniently omitted any discussion of algorithmic transparency, corporate governance, or safety oversight. It avoided addressing the fact that OpenAI’s most profitable division — enterprise API access — generates enormous margins precisely because it concentrates computing power and training data in corporate hands. A public wealth fund sounds noble. But public wealth funds require public oversight. The paper is silent on what oversight would look like, or whether AI companies would submit to it.
There is also the matter of timing within the news cycle. A New Yorker investigation into Sam Altman’s conduct and financial dealings landed the same day as the policy paper, a coincidence that multiple observers noted blunted coverage of the less flattering story. Whether intentional or not, the effect was clear: every tech journalist’s attention pivoted from corporate governance to macro-economic policy. The paper won the news cycle.
The pattern is not new. Tobacco companies funded smoking cessation research. Oil companies established carbon credit markets. In each case, the company most responsible for a problem positioned itself as the indispensable partner in solving it, gaining influence over the regulatory framework while continuing the activity that created the harm. OpenAI’s policy paper fits this template with uncomfortable precision. The company proposes taxing AI deployment while simultaneously being the largest deployer. It proposes a public wealth fund while preparing an IPO that would make its early investors spectacularly wealthy. It proposes worker transition programs while its API eliminates the jobs those workers hold. The proposals are internally consistent and economically literate. They are also a masterclass in pre-emptive regulatory positioning.
The deepest concern, articulated by several analysts, is that by accepting OpenAI’s framing of what needs to be solved, policymakers inadvertently accept its prescriptions. The paper argues the core problem is wealth concentration due to AI. But the actual problem might be unaccountable corporations developing powerful technologies with minimal oversight. Those are not the same problem, and the solutions are very different.
What every operator should do before the rules arrive
Regardless of whether OpenAI’s policy paper becomes law, the fact of its publication signals something important: the rules governing AI are being written now, and the companies closest to power are shaping them. For operators — founders, investors, and executives building AI products — several implications follow.
The tax and regulatory landscape you operate in today is almost certainly temporary. The current regime of minimal AI taxation, few labor displacement guardrails, and loose safety requirements exists largely because government moves slower than technology. The Q1 2026 funding data makes regulatory change inevitable: investors poured $300 billion into startups globally, with $242 billion flowing to AI companies alone — 80 percent of total venture capital. Capital concentration at that scale produces lobbying power, which produces regulation. The question is whether you help shape it or merely react to it.
The labor displacement story is no longer hypothetical. If Khosla is right about 80 percent job automation, every AI company is part of that machinery. OpenAI’s establishment of a fellowship program offering up to $100,000 in grants and $1 million in API credits signals awareness that demonstrating social benefit will become a prerequisite for regulatory acceptance. Newsweek’s coverage of the robot tax proposal noted that the political feasibility of such measures depends entirely on whether the public perceives AI companies as part of the solution or the architects of the problem. Document the human impacts of your AI deployments now. Measure job displacement alongside productivity gains. Build transition programs before legislation forces you to retrofit them. The companies that moved early on environmental, social, and governance frameworks in 2015 had smoother regulatory transitions than those that waited until 2020. The same dynamic will play out in AI, compressed into a two-year window.
The geopolitical dimension adds urgency. China’s AI sector is not waiting for American policy debates to resolve. DeepSeek V4 launched this week on domestically produced Huawei chips, open-sourced under Apache 2.0, priced at a fraction of Western alternatives. If American AI regulation overreaches — taxing domestic deployment while Chinese competitors face no such burden — the result could be regulatory arbitrage at a civilizational scale. OpenAI’s paper implicitly acknowledges this tension by framing its proposals as pro-growth rather than restrictive. The message to policymakers: tax us, but do not slow us down. Whether that balance is achievable remains the central political question of the AI decade.
The internal alignment question is equally urgent. OpenAI’s paper reflects one specific view of what AI development should optimize for: wealth generation with broad-based distribution. But that is one framework among many. Some investors prioritize shareholder returns above redistribution. Some founders believe markets, not government, should allocate AI’s benefits. Some researchers worry that focusing on macro-economic policy lets companies avoid hard questions about AI safety. As we noted in our analysis of the $150 billion IPO race, the distinction between building wealth and distributing it is becoming the defining strategic tension of the AI industry. Know which side your organization is on, and whether your incentive structures match.
Here is an operator checklist for the next 12 months:
- Audit your automation footprint. Count the roles your AI products replace or reduce. If you cannot quantify displacement, you cannot manage it — and regulators will quantify it for you.
- Model robot tax scenarios. If a 2–5 percent tax on AI-automated labor becomes law, what does it do to your unit economics? Build the spreadsheet now.
- Prototype portable benefits. If employment decouples from benefits, your workforce strategy changes fundamentally. Start designing for a world where your best engineers work 32-hour weeks and expect healthcare from a public fund, not your HR department.
- Build credibility on safety. The credibility gap in OpenAI’s proposal — that it comes from a company avoiding real oversight — is its core vulnerability. If your organization can demonstrate genuine commitment to transparency, worker transition, and safety governance, you build political capital that makes regulation less adversarial when it arrives.
- Quantify the distributional impact of your systems. OpenAI claims AI will generate trillions in aggregate value. But value distributed is not value generated — it is value extracted. If your product replaces ten workers, what happens to those ten people? Building these metrics into product development forces clarity on who wins and who loses from your technology, and generates the data you need when regulators arrive asking precisely these questions.
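For the scenario-modeling item on that list, the “spreadsheet” fits in a few lines of code. All figures below are placeholders for a hypothetical product line, not data from the article:

```python
# Sensitivity sketch for a robot tax on AI-automated labor: how a 2-5%
# levy assessed on displaced labor value changes gross margin.
# Revenue, cost, and displacement figures are invented placeholders.

ANNUAL_REVENUE = 10_000_000        # hypothetical AI product revenue
COST_OF_REVENUE = 3_000_000        # compute, support, etc.
AUTOMATED_LABOR_VALUE = 6_000_000  # wages of the work the product replaces

def gross_margin(tax_rate: float) -> float:
    """Gross margin after a tax assessed on displaced labor value."""
    tax = tax_rate * AUTOMATED_LABOR_VALUE
    return (ANNUAL_REVENUE - COST_OF_REVENUE - tax) / ANNUAL_REVENUE

for rate in (0.0, 0.02, 0.05):
    print(f"{rate:.0%} tax -> {gross_margin(rate):.1%} margin")
```

The instructive part is the tax base: because the levy falls on displaced labor value rather than revenue, products that replace many low-margin human hours are hit hardest, which is precisely why the displacement audit in the first checklist item has to come before the modeling.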
The original insight of OpenAI’s paper is correct: artificial intelligence will generate enormous wealth, and that wealth will concentrate unless institutions intervene. The scale is not in question. When a single quarter of venture funding exceeds $300 billion and four companies capture two-thirds of it, concentration is not a forecast — it is the present tense. When one company’s revenue triples in two years while simultaneously proposing to redesign the tax code, the conflict of interest is not theoretical. It is the defining feature of the moment.
The open question is whether the companies proposing these interventions — and positioned to benefit most from them — can be trusted to design frameworks that genuinely work. The answer depends on how seriously they take the frameworks they propose, measured not by the elegance of their white papers but by what they actually build. If OpenAI’s next quarterly report shows investments in worker transition programs, public research partnerships, and genuine safety governance, the paper may prove to be the starting gun for meaningful reform. If the next report shows only revenue growth and lobbying headcount, the paper was exactly what the critics said it was: strategy dressed in the language of solidarity.
In other news
Google releases Gemma 4 open-weights models — Google DeepMind released Gemma 4 on April 2, four open-weight models built on Gemini 3 research under the Apache 2.0 license. The flagship 31B model ranks third on Arena AI’s text leaderboard at 1452 Elo, with AIME 2026 math scores jumping from 20.8 percent to 89.2 percent over its predecessor.
Anthropic signs 3.5-gigawatt TPU deal with Google and Broadcom — Anthropic secured a deal with Google and Broadcom for 3.5 gigawatts of next-generation TPU capacity, expanding from the 1 GW already being delivered in 2026. The company’s run-rate revenue hit $30 billion, up from $9 billion at the end of 2025.
Q1 2026 shatters global venture funding records — Investors poured $300 billion into 6,000 startups in the first quarter, with AI companies capturing $242 billion — 80 percent of total global venture capital. The four largest rounds alone (OpenAI, Anthropic, xAI, Waymo) accounted for $188 billion.
NVIDIA highlights physical AI advances during National Robotics Week — NVIDIA showcased partnerships in field robotics and warehouse automation at National Robotics Week, with companies like Aigen deploying solar-driven rovers using computer vision to remove weeds and reduce herbicide use.