Stephen Van Tran

The most consequential battle over artificial intelligence is not happening inside a research lab or a corporate boardroom. It is happening in congressional primaries in New York, North Carolina, and Texas, where dueling super PACs are spending nine figures to determine whether the next Congress will regulate AI or let the industry write its own rules. Leading the Future, backed by OpenAI president Greg Brockman and Andreessen Horowitz, has assembled a $125 million war chest to elect candidates who oppose new AI restrictions. On the other side, Public First Action — anchored by a $20 million donation from Anthropic — is backing pro-regulation candidates from both parties. Combined with affiliated committees and state-level operations, total AI-linked political spending in this cycle is on pace to exceed $185 million before a single general-election ballot is cast.

The stakes are not abstract. Congress has introduced hundreds of AI-related bills across the 118th and 119th Congresses yet enacted fewer than 30. California’s sweeping SB 1047, which would have imposed safety requirements on frontier models, passed both chambers only to be vetoed by Governor Newsom in 2024. The regulatory vacuum means that whoever controls the next Congress controls whether the United States gets a federal AI framework — or whether the industry continues to operate in a patchwork of voluntary commitments and executive orders. With OpenAI now valued at $730 billion and Anthropic at $380 billion, the companies bankrolling these campaigns have hundreds of billions of reasons to care about the outcome.

The Fairshake playbook, reloaded with a bigger budget

The AI super PAC offensive did not appear from nothing. Its architects studied the crypto industry’s 2024 election strategy with forensic precision and decided to replicate it at larger scale. Fairshake, the crypto-funded super PAC, was the single largest corporate donor of the 2024 election cycle, spending more than $40 million to support incumbents on key financial committees and backing over 50 candidates who won their races. The playbook was elegant and ruthless: never mention crypto in your ads, pick races where outside money can be decisive, and attack candidates early enough to define them before they define themselves.

Leading the Future has borrowed that template almost line for line. The super PAC’s co-strategist previously advised Fairshake during its 2024 campaign, and the structural parallels are unmistakable. Leading the Future operates both federal and state PACs alongside 501(c)(4) nonprofit arms that can run issue advocacy without disclosing donors. Its first targets are not general-election opponents but candidates in contested primaries — a strategy designed to eliminate regulation-friendly lawmakers before voters in safe districts ever get a choice. Brockman and his wife each contributed $12.5 million personally, while Marc Andreessen and Ben Horowitz each matched that amount. Palantir co-founder Joe Lonsdale and AI search company Perplexity round out the donor rolls, pushing the total to more than $100 million in initial commitments with $70 million in cash on hand.

The advertising itself tells the story of the strategy. According to NBC News, the super PAC’s ads contain no reference to artificial intelligence whatsoever. Instead, the groups lean into red-meat messaging on immigration, crime, or progressive priorities depending on the district — anything to move voters without revealing the actual policy reason these groups are taking sides. It is an approach that treats AI regulation as too abstract for campaign ads but too important to leave to democratic debate. The implicit bet is that most voters do not yet have strong opinions on AI policy, so the cheapest path to victory is to win on other issues entirely and install friendly legislators who will let the industry self-govern once they reach Washington.

The counter-offensive looks different in both scale and philosophy. Public First Action has pledged $50 million to support pro-regulation candidates from both parties, with Anthropic’s $20 million representing the single largest donation from any AI company to a political cause. The group has already launched six-figure ad buys backing pro-regulation Republicans including Marsha Blackburn of Tennessee and Pete Ricketts of Nebraska, signaling that it views AI oversight as a genuinely bipartisan cause rather than a progressive crusade. Public First Action’s stated priorities include giving the public more visibility into AI companies, opposing federal preemption of state-level AI regulation without a strong replacement standard, maintaining export controls on AI chips, and regulating high-risk applications such as AI-enabled biological weapons research. Where Leading the Future is modeled on the crypto industry’s regulatory escape, Public First Action is betting that the companies willing to accept guardrails now will earn the public trust needed to survive the inevitable backlash later.

The spending war creates a fascinating split within the AI industry itself. OpenAI — the company that once published its research openly in the name of safety — is now the largest single-company backer of the anti-regulation side. Anthropic — founded by former OpenAI researchers who left precisely because they wanted stronger safety practices, and itself navigating a fraught relationship with the Pentagon — is the largest backer of the pro-regulation side. The political battlefield is externalizing a philosophical rift that has simmered inside AI research since at least 2020, when the debate over scaling risks first divided the field into camps that prioritized speed and camps that prioritized caution. Now that divide has a price tag, and it is denominated in nine figures.

Follow the money to a Manhattan congressional seat

The proxy war has its most vivid expression in New York’s 12th Congressional District, where Alex Bores — a former Palantir engineer turned state legislator — is running for Congress on an AI regulation platform. Bores successfully pushed New York’s RAISE Act, one of the first state-level AI safety laws, through the legislature. Now he wants to bring that approach to the federal level, and the AI industry has responded by making him the most expensive primary target in the country.

Leading the Future has already spent more than $1 million on advertising attacking Bores in a deep-blue, open district where roughly a dozen Democrats are competing for the nomination. The ads do not mention AI. They attack Bores on local issues, attempting to define him as a flawed candidate before most voters learn that the money behind those attacks flows directly from the companies his legislation would constrain. It is a strategy that bets on information asymmetry — the assumption that primary voters in Manhattan will not trace the funding behind a glossy mailer back to Greg Brockman’s checkbook.

The Bores race matters beyond one congressional seat because it is establishing the market price for defeating an AI-skeptical candidate. If $1 million in primary spending can knock out a former tech executive with legislative accomplishments and bipartisan credibility, every state legislator in the country will take notice. The chilling effect on future AI regulation efforts could dwarf the direct impact of any single election. Conversely, if Bores survives the onslaught and wins, it proves that AI money has limits — the same lesson the crypto industry learned in a handful of expensive 2024 losses that dented Fairshake’s aura of invincibility.

Public First Action has responded by backing Bores through affiliated committees including Jobs and Democracy and Defending Our Values, which reported $2.8 million in independent expenditures supporting candidates in New York, North Carolina, and Texas within their first two weeks of operation. The speed of deployment suggests that both sides anticipated this escalation and arrived with battle plans ready. The result is a kind of arms race where each dollar spent by one side triggers a matching response from the other, and the ultimate winner may simply be whichever operation runs out of money last.

The pattern extends well beyond New York. In Texas, Leading the Future has backed Chris Gober, a Republican candidate who has signaled openness to federal preemption of state AI rules — a position that would effectively nullify the patchwork of state laws that currently represents the only binding AI regulation in the United States. In North Carolina, both super PAC networks are active in multiple House primaries, turning a traditionally low-attention election cycle into a laboratory for testing whether AI money can reliably manufacture outcomes across different electorates, geographies, and party dynamics. Each race generates data that both sides will use to optimize their targeting for the general election and the cycles beyond.

Stitching together the FEC filings and announced commitments from both sides, AI-linked political spending in the 2026 cycle already totals roughly $175 million to $185 million in pledged and deployed capital — and that figure does not include the crypto super PAC Fairshake, which entered 2026 with nearly $200 million of its own and frequently backs the same innovation-friendly candidates. Even after deducting the overlap between the crypto and AI donor networks — Andreessen Horowitz alone donated $23 million to Fairshake for its 2026 operations — the tech industry’s total midterm war chest likely approaches $400 million, making technology the single largest outside spending category in American politics for the first time in history.
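As a back-of-envelope check on those totals — a minimal sketch using only the figures cited in this piece, with the deduplication limited to the one overlap that is publicly known:

```python
# Rough tally of tech-sector 2026 midterm spending, in $ millions,
# using only the figures cited in this article.

ai_linked_low, ai_linked_high = 175, 185  # AI-linked pledged + deployed capital
fairshake = 200                           # crypto super PAC cash entering 2026
a16z_overlap = 23                         # a16z money counted in both networks

# A naive sum double-counts donors active in both crypto and AI PACs,
# so subtract the one known overlap for a more conservative figure.
naive_low = ai_linked_low + fairshake
naive_high = ai_linked_high + fairshake
adjusted_low = naive_low - a16z_overlap
adjusted_high = naive_high - a16z_overlap

print(f"naive total:   ~${naive_low}-{naive_high}M")        # ~$375-385M
print(f"deduplicated:  ~${adjusted_low}-{adjusted_high}M")  # ~$352-362M
```

Meta’s state-level operation (unquantified beyond “tens of millions”) and any undisclosed donor overlap sit outside these figures, which is why the combined war chest lands near $400 million rather than at a precise number.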

The ways this $185 million bet could backfire

The most obvious risk for the AI industry’s political offensive is that it works too well. If Leading the Future successfully clears the field of regulation-minded candidates, the resulting Congress will face an AI governance vacuum at precisely the moment public concern is accelerating. Polls consistently show that a majority of Americans want more AI oversight, not less. A Congress that appears captured by industry money will struggle to maintain legitimacy on AI issues, and the backlash could produce far harsher regulation when it eventually arrives — not from the federal government but from state attorneys general, European regulators, and class-action plaintiffs who see an unregulated industry as a target-rich environment.

The crypto parallel is instructive here, and not in the way Leading the Future’s strategists might hope. Fairshake’s 2024 victories did install crypto-friendly lawmakers, but the resulting Congress has not produced the comprehensive crypto legislation the industry wanted. Instead, the spectacle of industry-funded candidates refusing to engage on crypto policy created a voter education moment that made regulation more popular, not less. Fairshake’s 2026 war chest of $200 million suggests the industry itself recognizes that buying elections is a recurring expense, not a one-time investment — a treadmill where you have to keep spending just to maintain the status quo.

There is also the reputational risk for the companies themselves. OpenAI’s Greg Brockman simultaneously leads a company that publicly commits to beneficial AI development and personally funds a super PAC designed to prevent oversight of that development. The contradiction is difficult to paper over, particularly when Anthropic is publicly funding the opposing side and framing its donation as an investment in the safety ecosystem. The AI industry’s internal split mirrors a genuine philosophical divide about whether regulation is a threat to innovation or a prerequisite for public trust, and the super PAC war is forcing companies to choose sides in ways that will define their brands for years. Every enterprise customer evaluating an AI vendor will now factor political positioning into the procurement decision — a dynamic that transforms campaign donations into competitive intelligence.

The international dimension adds another layer of complexity. Leading the Future’s stated mission emphasizes winning the AI race against China, framing regulation as a drag on American competitiveness. But the argument has a structural weakness: China’s own AI ecosystem is flourishing under a regulatory regime that is, in many respects, stricter than anything proposed in the United States. Beijing requires algorithm registration, mandates content filtering, and imposes data-handling rules that would make most American AI companies shudder. The notion that regulation automatically cedes ground to China collapses under scrutiny when the competitor you are supposedly racing already regulates more aggressively than you do.

Meta has entered the fray from yet another angle, launching the American Technology Excellence Project to elect AI-friendly state-level candidates and signaling willingness to spend tens of millions on state races. The addition of state-level spending opens a second front that pro-regulation forces may not have the resources to contest. Public First Action’s $50 million looks formidable until you map it against Leading the Future’s $125 million, Meta’s uncapped state operation, and Fairshake’s $200 million in adjacent crypto spending. The asymmetry is structural: companies with hundreds of billions in market capitalization can treat political spending as a rounding error, while safety-focused organizations must justify every dollar against their primary research missions.

The early results offer cold comfort for regulation advocates. Of the 20 candidates in the Texas and North Carolina primaries who received AI-industry funding, only one lost her race. That 95-percent win rate, while drawn from a small and self-selected sample, establishes a proof of concept that will attract even more industry money in subsequent cycles. The question is no longer whether AI companies will spend to shape regulation but whether any countervailing force — voters, civil society, rival companies — can match their resources and discipline.

What the next twelve months look like, and what to watch for

The AI super PAC war is entering its most intense phase as primary calendars accelerate into summer and fall. Several dynamics will determine whether this spending wave reshapes Congress or merely enriches political consultants.

First, watch the New York 12th District primary as a bellwether. If Alex Bores wins despite the industry’s million-dollar opposition campaign, it signals that AI money has the same ceiling that pharma and tobacco money eventually hit — a point where voters punish candidates for their backers rather than rewarding them. If Bores loses, expect a cascade of withdrawal from AI policy work by state legislators who conclude the personal cost is too high.

Second, track whether the advertising strategy evolves. Right now, AI super PACs are winning by not talking about AI. That works in a low-information primary but becomes riskier in general elections where opponents can tie candidates to their donors. The first campaign that successfully makes AI funding a liability — the way “Big Oil money” became shorthand for climate obstruction — will change the calculus for every race that follows.

Third, monitor the defense authorization bill moving through Congress, which contains several AI governance provisions including requirements for the Department of Defense to establish frameworks for assessing, governing, and approving AI model deployment. Defense bills often serve as vehicles for regulation that cannot pass on its own merits, and the provisions being embedded now could become the de facto federal AI framework by default — regardless of what the super PACs’ preferred candidates want.

Fourth, pay attention to state-level action. With federal legislation stalled and SB 1047 vetoed, states are passing their own AI laws at an accelerating pace. At least a dozen new state AI statutes took effect on January 1, 2026, covering everything from algorithmic discrimination to deepfake disclosure. Leading the Future’s push for federal preemption of state rules would wipe these laws off the books, making the preemption question possibly the highest-stakes policy outcome of the entire election cycle.

For operators and builders, the practical checklist is straightforward but urgent. First, audit your compliance posture against both federal proposals and the strictest state laws, because the regulatory floor is rising regardless of which super PAC wins. The dozen-plus state statutes that took effect on January 1, 2026 cover algorithmic discrimination, deepfake disclosure, and automated decision-making transparency — and more are coming.

Second, budget for compliance as a recurring line item, not a one-time project. The companies that treat governance as infrastructure rather than overhead will have a structural advantage when regulation does arrive, whether that takes twelve months or thirty-six.

Third, engage with the policy process directly rather than relying on industry groups to represent your interests, because the companies funding these super PACs are optimizing for their own regulatory preferences, not yours. A startup building AI safety tools has fundamentally different interests from OpenAI or Meta, and no super PAC represents that nuance.

Fourth, watch the FEC filings quarterly — in 2026, the money trail is the most reliable leading indicator of where AI policy is actually headed. When Leading the Future opens a new state-level PAC, that state is about to become a battleground. When Public First Action buys ads in a district, expect regulatory momentum to follow.

The broader lesson of this moment is that AI governance has graduated from a policy debate to a power struggle, and power struggles are won by the side that shows up with resources, strategy, and stamina. The $185 million already committed is not the ceiling — it is the opening bid. By November 2026, the total could easily double as general-election spending kicks in and both sides escalate. The question that every participant in the AI economy should be asking is not whether this spending will shape regulation, but what kind of regulatory system emerges when the industry that needs oversight is also the industry that funds the campaigns of its overseers.

In other news

Frore Systems hits unicorn status with $143M raise — The AI chip cooling startup reached a $1.64 billion valuation in a Series D round led by MVP Ventures, with participation from Fidelity, Qualcomm Ventures, and Mayfield Fund. The funding will scale its LiquidJet and AirJet thermal platforms as data center power density surges (PR Newswire).

xAI ships Grok 4.20 Beta 2 with reduced hallucinations — The March 3 update to xAI’s flagship model introduces improved instruction following and a four-agent parallel processing architecture, explicitly prioritizing production reliability over raw benchmark scores (xAI Release Notes).

MiniMax surpasses Baidu in market cap on M2.5 momentum — The Shanghai-based AI lab’s stock surged 22 percent on March 10, pushing its Hong Kong-listed market capitalization to 382.6 billion HKD after its M2.5 model captured 18.7 trillion tokens of weekly volume at roughly one-twentieth the cost of competing frontier models (South China Morning Post).

NVIDIA GTC 2026 opens with Rubin platform reveal — Jensen Huang’s Monday keynote at the SAP Center unveiled the full Rubin GPU lineup alongside the Vera CPU and a Groq-derived inference chip, with Rubin-based cloud instances expected from AWS, Google Cloud, and Microsoft in the second half of 2026 (NVIDIA Newsroom).

Defense authorization bill embeds AI governance mandates — The fiscal year 2026 defense bill advancing through Congress includes provisions requiring the Department of Defense to establish a formal framework for assessing, governing, and approving AI model deployment across military applications (Akin).