Stephen Van Tran

The White House dropped a four-page document on Friday that could reshape the entire American AI industry more profoundly than any model release, any funding round, or any chip announcement this year. The national AI legislative framework lays out six guiding principles for Congress and makes one demand unmistakably clear: pass a single federal standard that preempts the patchwork of state AI laws spreading across the country, and do it this year. White House AI czar David Sacks and Chief Technology Officer Michael Kratsios delivered the message in tandem, framing the choice as binary — unified national rules that let America win the AI race, or a fragmented regulatory landscape that hands the advantage to China and the European Union. The document arrives two days after Senator Marsha Blackburn released a nearly 300-page discussion draft of the TRUMP AMERICA AI Act, a sweeping bill that would sunset Section 230, impose a duty of care on AI developers, codify copyright protections against AI training, and preempt state regulations that conflict with the new federal standard. Together, the two moves represent the most aggressive attempt in American history to centralize control over how artificial intelligence is built, deployed, and governed.

The timing is deliberate. State legislatures across the country are in session, and they are not waiting for Washington. Lawmakers in 45 states have introduced 1,561 AI-related bills in 2026 alone, covering everything from algorithmic bias audits to deepfake disclosures to chatbot safety requirements for minors. California enacted 24 AI-related laws across the 2024 and 2025 legislative sessions, including SB 53, the frontier AI transparency law that Governor Newsom signed after vetoing its more aggressive predecessor, SB 1047. Colorado, Utah, and Texas have all passed broad AI governance statutes. And a bipartisan coalition of 36 state attorneys general has already told Congress, in no uncertain terms, that they will fight any blanket federal preemption of their authority to regulate AI within their borders. The stage is set for the most consequential federalism battle in technology policy since the early internet era — and the outcome will determine whether AI governance in America looks more like the relatively permissive federal telecommunications framework or the messy, state-driven patchwork that governs data privacy today.

The four-page blueprint that could swallow fifty statehouses

The White House framework is short by design. At just four pages, it is not legislation. It is a set of instructions to Congress, a legislative blueprint that signals the administration’s priorities and dares Capitol Hill to act. The six guiding principles cover protecting children and empowering parents, preventing electricity costs from surging as data centers proliferate, respecting intellectual property rights, preventing censorship and protecting free speech, enabling innovation and ensuring American AI dominance, and educating Americans to build an AI-ready workforce. Each principle is framed in language calibrated to appeal to the broadest possible coalition — child safety for the social conservatives, energy affordability for the populists, IP protection for the creative industries, anti-censorship for the free-speech wing, and innovation for the business lobby.

The preemption language is the document’s center of gravity. The framework states that Congress should preempt state AI laws that impose undue burdens, establishing a minimally burdensome national standard rather than fifty discordant ones. That single sentence is a grenade tossed into the middle of every statehouse AI debate in the country. It builds on President Trump’s December 11, 2025 executive order titled “Ensuring a National Policy Framework for Artificial Intelligence,” which directed the Department of Justice to establish an AI Litigation Task Force specifically charged with challenging state AI laws in federal court on grounds that they unconstitutionally burden interstate commerce or are preempted by federal regulations. The executive order also conditioned access to federal grant funding on states’ willingness to avoid enacting what the administration considers overly burdensome AI laws — a financial pressure tactic that goes beyond mere persuasion.

The framework does include carve-outs. Federal preemption would not extend to state laws governing children’s safety, AI compute and data center infrastructure other than generally applicable permitting reforms, or state government procurement and use of AI. These exceptions are designed to deflect the most politically potent criticism — that the White House is stripping states of the ability to protect children from AI harms. But the exceptions may be too narrow to satisfy the states that have already invested years of legislative effort in building comprehensive AI governance regimes. California’s SB 53, for instance, imposes transparency requirements on frontier model developers that extend well beyond child safety. Colorado’s AI Act mandates bias audits for high-risk systems used in employment, lending, and insurance. Whether these laws survive federal preemption depends entirely on how broadly Congress interprets the phrase “undue burdens” — a definition that the White House framework conspicuously leaves to lawmakers.

The framework also calls for less stringent permitting requirements so that data centers can generate power on site, a provision that directly addresses the energy bottleneck constraining AI infrastructure buildout. With Big Tech committing roughly $690 billion to AI infrastructure across 2025 and 2026, the power demands of new GPU clusters have collided with aging electrical grids and local permitting processes that were never designed for facilities consuming hundreds of megawatts. The White House is betting that streamlining these permits at the federal level will accelerate AI deployment while simultaneously reducing the regulatory surface area available to states.

A 300-page bill with a loaded gun in every chapter

Senator Blackburn’s TRUMP AMERICA AI Act is where the framework’s principles become concrete policy — and where the political fights get truly vicious. The nearly 300-page discussion draft combines several previously proposed measures into a single omnibus package that touches nearly every corner of the digital economy. Its most explosive provision is the sunset of Section 230 of the Communications Decency Act, the 1996 law that shields online platforms from liability for user-generated content. Section 230 has been the legal bedrock of the modern internet, enabling everything from social media to marketplace platforms to operate without facing lawsuits for every piece of content posted by their users. Sunsetting it does not merely regulate AI. It fundamentally restructures the liability framework for every company that hosts user content online.

The bill imposes a duty of care on AI developers, requiring them to exercise reasonable care in the design, development, and operation of AI platforms to prevent and mitigate reasonably foreseeable harms to users. The language echoes product liability standards from the physical world — the same legal frameworks that hold automakers responsible for defective brakes and pharmaceutical companies accountable for undisclosed side effects. Applied to AI, the duty of care would create a new legal obligation for companies like OpenAI, Anthropic, Google, Meta, and every startup building on their models to anticipate how their systems might cause harm and take affirmative steps to prevent it. The bill requires providers of high-risk AI systems to submit to third-party audits specifically designed to detect viewpoint or political affiliation discrimination — a provision that reflects the conservative movement’s conviction that AI systems systematically suppress right-leaning perspectives.

The draft also integrates the NO FAKES Act, giving individuals the right to control the use of their digital likeness, and includes copyright language stipulating that the unauthorized use of copyrighted works by AI developers to train their models does not constitute fair use under the Copyright Act. That single sentence could upend the business model of every major AI lab. OpenAI, Anthropic, Google, and Meta have all trained their foundation models on vast corpora of copyrighted text, images, code, and audio scraped from the open internet. If Blackburn’s copyright provision becomes law, every one of those companies would face retroactive liability for training data they have already ingested — a financial exposure that could run into the tens of billions of dollars. The bill also incorporates the Kids Online Safety Act provisions that Blackburn has championed for years, requiring platforms to give parents controls over accounts and devices and mandating features to combat potential sexual exploitation or self-harm among minors.

The preemption mechanism in Blackburn’s draft is both broader and more nuanced than the White House framework suggests. The bill would establish a federal floor — a minimum standard that all AI developers must meet — while simultaneously capping state authority to go further in areas covered by the federal regime. States could still regulate children’s safety and their own government procurement of AI, but they could not impose additional disclosure requirements, bias audit mandates, or liability standards that exceed the federal baseline. For companies currently navigating the compliance maze of Colorado’s bias audits, California’s transparency requirements, and Texas’s governance mandates, the appeal is obvious. One set of rules, one compliance team, one legal strategy. But for state legislators who spent years crafting those laws in response to constituent harms that Washington ignored, the bill reads like a hostile takeover.

The bill’s critics are already sharpening their knives. The Center for Data Innovation published an analysis calling the discussion draft “not a serious starting point” for a federal AI framework, arguing that it over-regulates in ways that would cripple American competitiveness. Silicon Valley’s reaction has been predictably split: the copyright and Section 230 provisions alarm the major labs, while the federal preemption of state laws is exactly what their lobbyists have spent more than a billion dollars to achieve. The bill forces every stakeholder to make uncomfortable trade-offs — accept federal copyright liability in exchange for regulatory certainty, or fight the bill and risk the continued expansion of state-level regulation that is already creating compliance nightmares for companies operating across multiple jurisdictions.

The $1.1 billion machine fighting for one set of rules

Behind the legislative maneuvering lies a lobbying operation of staggering scale. A comprehensive analysis by Public Citizen found that Big Tech executives, corporations, and investors spent at least $1.1 billion during the 2024 election cycle and throughout 2025 to shape the regulatory landscape — a figure that encompasses direct lobbying expenditures, campaign contributions, and donations to super PACs. One in four federal lobbyists now reports working on artificial intelligence issues. Microsoft employed 63 AI lobbyists in 2025. Meta spent a record $19.7 million on federal lobbying in the first nine months of the year alone and launched a new super PAC called the American Technology Excellence Project with tens of millions earmarked to support tech-friendly candidates in state elections and oppose emerging state AI regulation. OpenAI president Greg Brockman and Andreessen Horowitz put $100 million into a PAC called Leading the Future targeting the same goal. The AI industry is not merely lobbying for favorable regulation. It is building a political infrastructure designed to ensure that the rules governing its products are written in Washington, not Sacramento or Austin or Denver.

The opposition is formidable but outgunned financially. The 36 state attorneys general who signed the bipartisan letter opposing federal preemption argue that broad preemption would undermine states’ ability to respond quickly and effectively to emerging AI risks. More than 50 Republican state legislators from 22 states sent their own letter to the White House in early March, pushing back against federal pressure on Utah and other states pursuing AI legislation. The Republican letter is particularly significant because it exposes a fissure within the president’s own party. These are not Democratic regulators fighting a deregulatory administration. They are conservative state lawmakers invoking states’ rights and federalism principles against a Republican White House — an ideological tension that cannot be easily resolved.

Stitching together the lobbying data yields a back-of-the-envelope estimate worth pausing on. If the AI industry’s $1.1 billion in political spending across 2024-2025 succeeded in delivering a federal preemption regime that eliminated the need to comply with divergent state laws, the return on investment would be extraordinary. Companies operating in all 50 states currently face potential compliance costs estimated in the hundreds of millions of dollars annually as they navigate conflicting definitions of high-risk AI systems, different disclosure requirements, and varying audit mandates. A single federal standard — even one with a duty of care and copyright provisions — would collapse those costs to a single compliance framework, potentially saving the industry $500 million to $1 billion per year in legal, engineering, and administrative overhead. That makes the $1.1 billion in political spending look less like influence peddling and more like a cold-blooded infrastructure investment with a payback period of one to two years.
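The payback arithmetic is simple enough to check directly. Here is a minimal sketch using only the two figures quoted above; the variable names are illustrative, and the savings range is this article’s own estimate, not a published number:

```python
# Back-of-the-envelope payback on the industry's 2024-2025 political spending.
# Figures are the ones quoted above; the savings range is an estimate.
political_spend = 1.1e9                  # USD, Public Citizen's spending figure

for annual_savings in (0.5e9, 1.0e9):    # USD/year, estimated compliance savings
    payback_years = political_spend / annual_savings
    print(f"At ${annual_savings / 1e9:.1f}B saved per year: "
          f"payback in {payback_years:.1f} years")

# Output:
# At $0.5B saved per year: payback in 2.2 years
# At $1.0B saved per year: payback in 1.1 years
```

Even at the conservative end of the savings range, the spending would pay for itself within a single election cycle.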

The EU comparison sharpens the stakes. The EU AI Act enters full enforcement in August 2026, imposing a risk-based classification system with binding obligations for high-risk AI systems across all 27 member states. The European Commission’s AI Office is already issuing formal document retention orders and launching investigations. The Brussels Effect — the phenomenon where EU regulations become de facto global standards because multinational companies find it easier to maintain a single compliance framework — is already influencing how American AI companies design their products. If the United States does not pass a comprehensive federal framework, the irony would be devastating: American AI companies would end up governed not by American law but by European regulation adopted voluntarily to simplify global compliance. That argument — sovereignty through legislation — is perhaps the most potent weapon in the preemption advocates’ arsenal.

The road from here runs straight through a minefield

The White House framework and Blackburn’s bill face a gauntlet of obstacles that could prevent either from becoming law in 2026. The Section 230 sunset is a poison pill for many Democrats and some Republicans who worry about its cascading effects on the broader internet. The copyright provision will trigger an all-out war between the AI labs and the creative industries, with each side capable of deploying enormous lobbying resources. The preemption language will face constitutional challenges from states invoking their police power to protect citizens from AI harms — challenges that could reach the Supreme Court and take years to resolve. And the duty of care provision raises fundamental questions about how courts would assess what constitutes a reasonably foreseeable harm from a large language model that can produce billions of distinct outputs.

The political calendar compounds the difficulty. The 2026 midterm elections are eight months away, and every member of the House and a third of the Senate are facing voters. Voting to preempt state AI laws exposes incumbents to attack from both directions — from the left, for enabling Big Tech to avoid accountability, and from the right, for trampling states’ rights and conservative federalism principles. The more than 50 Republican state legislators who wrote to the White House are a preview of the political pain that federal preemption would inflict on Republican candidates in state-level races. Blackburn’s bill may be the opening bid, but the final legislation — if any passes — will look substantially different after months of committee markup, floor amendments, and backroom negotiations.

Meanwhile, the AI industry is not standing still. OpenAI has surpassed $25 billion in annualized revenue and is reportedly taking early steps toward a public listing. Anthropic is approaching $19 billion in annualized revenue while simultaneously suing the Pentagon over a supply chain risk designation in a case that has drawn support from nearly 150 retired federal and state judges and employees from rival companies including OpenAI and Google. The models are getting more powerful, the deployments are accelerating, and the revenue is scaling at a pace that makes the regulatory debate feel increasingly urgent. Every month that passes without a clear regulatory framework is a month in which AI systems become more deeply embedded in hiring decisions, lending algorithms, healthcare diagnostics, and educational tools — creating facts on the ground that will be exponentially harder to regulate retroactively.

There is also the question of enforcement capacity. Even if Congress passes a comprehensive AI bill, the federal government has no existing agency with the technical expertise, staffing, or budget to regulate AI systems at the scale the bill envisions. The Federal Trade Commission, the most likely candidate for primary enforcement authority, has approximately 1,100 full-time employees and a budget that has barely kept pace with inflation. By contrast, the EU’s AI Office was purpose-built for AI enforcement and has already begun hiring specialists in machine learning, risk assessment, and algorithmic auditing. Creating an equivalent American apparatus from scratch would take years and require congressional appropriations that a divided government may be unable to deliver. The enforcement gap is not a minor implementation detail. It is a structural flaw that could render any federal AI framework toothless on arrival.

The most likely near-term outcome is legislative gridlock followed by escalating state action. If Congress cannot pass a comprehensive AI bill before the midterm elections, the 1,561 state bills currently in play will continue advancing, and the DOJ’s AI Litigation Task Force will begin filing federal lawsuits against the most aggressive state laws. The result would be a years-long legal and political battle fought simultaneously in Congress, in federal courts, and in statehouses across the country — a three-front war that creates exactly the kind of regulatory uncertainty that the White House framework was designed to prevent. For AI operators, the practical takeaway is sobering: build compliance systems that can handle multiple overlapping regulatory regimes, because no matter who wins the federalism fight, the transition will be long and the rules will keep changing.

  • Map your state exposure now. Audit which of the 1,561 pending state AI bills could apply to your products, especially in California, Colorado, Texas, Utah, and Illinois. Build modular compliance architectures that can accommodate both state-level and eventual federal requirements without costly rearchitecture; a sketch of the idea follows this list.
  • Treat the duty of care as inevitable. Whether it comes from Blackburn’s bill or state tort law, the legal obligation to anticipate and mitigate reasonably foreseeable AI harms is converging from multiple directions. Invest in red-teaming, safety testing, and incident documentation now, before it becomes a legal mandate.
  • Prepare for copyright reckoning. If Blackburn’s copyright provision survives in any form, every company that trained models on copyrighted data will need a legal strategy. Begin cataloging training data provenance, negotiating retroactive licenses where feasible, and building synthetic data pipelines as a hedge.
  • Watch the EU August deadline. The EU AI Act’s full enforcement in August 2026 will establish the first comprehensive high-risk AI compliance benchmark globally. Companies that achieve EU compliance will have a structural advantage if the US eventually adopts similar risk-based categories.
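On the first of those points, here is a minimal Python sketch of what a modular compliance layer could look like. Every jurisdiction code, rule name, and threshold below is a hypothetical placeholder, not a reference to any actual statute; the point is the structure, not the rules:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Deployment:
    """Where and how an AI feature is deployed. Fields are illustrative."""
    jurisdiction: str    # e.g. "CA", "CO", "US"
    high_risk: bool      # used in hiring, lending, or insurance decisions
    reaches_minors: bool # audience may include children

@dataclass
class Rule:
    """A named compliance obligation plus a predicate deciding if it applies."""
    name: str
    applies: Callable[[Deployment], bool]

# Obligations are registered per jurisdiction, so a federal preemption event
# becomes a registry update rather than a rearchitecture. All rule names
# here are hypothetical placeholders.
REGISTRY: dict[str, list[Rule]] = {
    "CO": [Rule("bias-audit", lambda d: d.high_risk)],
    "CA": [Rule("frontier-transparency", lambda d: True),
           Rule("minor-safeguards", lambda d: d.reaches_minors)],
    "US": [],  # empty until a federal standard exists
}

def obligations(deployment: Deployment) -> list[str]:
    """List the rule names triggered for a given deployment."""
    rules = REGISTRY.get(deployment.jurisdiction, []) + REGISTRY["US"]
    return [r.name for r in rules if r.applies(deployment)]

print(obligations(Deployment("CO", high_risk=True, reaches_minors=False)))
# -> ['bias-audit']
```

The design choice worth copying is the separation of concerns: product code asks a single function what it owes, and the answer changes when the registry changes, whichever legislature moves first.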

In other news

Micron smashes revenue records but investors blink at the bill — Micron Technology reported fiscal Q2 revenue of $23.9 billion, nearly tripling year-over-year, with adjusted earnings of $12.20 per share crushing the $9.31 consensus estimate. Shares still slipped after the company revealed plans to spend $25 billion in capital expenditure during fiscal 2026, extending the pattern of AI memory companies posting record profits while spooked investors question the sustainability of the spending cycle.

Anthropic’s Pentagon fight draws 150 retired judges — Nearly 150 retired federal and state judges filed an amicus brief supporting Anthropic in its lawsuit against the Trump administration over the unprecedented supply chain risk designation. The Pentagon doubled down this week, citing concerns about Anthropic’s foreign workforce including employees from the People’s Republic of China, escalating a dispute that could set precedent for how the government uses national security designations against domestic AI companies.

Google launches Gemini 3.1 Flash-Lite for the cost-conscious — Google introduced Gemini 3.1 Flash-Lite, an efficiency-focused model delivering 2.5x faster response times and 45 percent faster output generation compared to earlier versions, priced at just $0.25 per million input tokens. The release underscores the industry’s pivot from raw capability to cost efficiency as enterprises demand AI that scales without breaking budgets.

OpenAI crosses $25 billion in annualized revenue — OpenAI has surpassed $25 billion in annualized revenue and is reportedly exploring early steps toward a public listing as soon as late 2026, while rival Anthropic approaches $19 billion in annualized revenue. The gap between the two leaders continues to narrow as enterprise adoption accelerates across both platforms.

Linux Foundation banks $12.5 million for open-source AI security — The Linux Foundation received $12.5 million in grant funding from Anthropic, AWS, GitHub, Google, Google DeepMind, Microsoft, and OpenAI to strengthen open-source security infrastructure, a rare moment of industry-wide cooperation on a shared vulnerability that underpins the majority of production AI systems.