Jensen Huang stood before the cameras at the Center for Strategic and International Studies in December 2025 and drew a diagram that has since become the Rosetta Stone for understanding AI competition. “Think of AI as a five-layer cake,” he said. “Energy at the bottom. Then infrastructure—data centers. Then chips. Then models. Then applications.” He paused, letting the implications settle. “China has twice the amount of energy we have as a nation. They can build a hospital in a weekend. If you want to build a data center here in the United States, from breaking ground to standing up an AI supercomputer is probably about three years.” (CSIS conversation with Jensen Huang)
This five-layer framework reshapes how we should evaluate the AI race between the United States and China. Most commentary fixates on the top layers—who has the best models, who trains the smartest chatbots, who writes the most impressive code. But Huang’s insight cuts deeper: those glamorous top layers rest on foundations that neither nation fully controls. America dominates chips and models. China dominates energy and build velocity. Infrastructure sits contested. And applications—the layer where AI actually transforms economies—increasingly favor whoever can deploy fastest at scale.
The uncomfortable truth emerging from this analysis is that the United States holds significant advantages in some layers while facing structural deficits in others. China mirrors this pattern in reverse. Neither nation can simply outspend or out-engineer the other across all five layers. The winner—if there is one—will be whichever country can shore up its weaknesses faster than the other can exploit them. The race is not a sprint toward some finish line called “artificial general intelligence.” It’s a grinding infrastructure contest that will play out over decades, measured in terawatt-hours and construction permits as much as in benchmark scores.
The foundation nobody wants to talk about
Energy sits at the bottom of Huang’s stack because nothing else functions without it. Training a frontier AI model consumes electricity equivalent to powering tens of thousands of homes for a year. Running inference at scale—every ChatGPT query, every Midjourney image—draws continuous power from data centers that never sleep. The International Energy Agency projects that data centers alone could triple their share of U.S. electricity consumption to 12% by 2028, driven almost entirely by AI workloads. (IEA report on data centers)
Here is where China’s advantages become impossible to ignore. China generated approximately 9,500 terawatt-hours of electricity in 2024, compared to roughly 4,200 TWh in the United States—more than double the output. (U.S. Energy Information Administration) That gap isn’t closing. China added more new power generation capacity in 2024 alone than the United States has added in the past decade combined, including solar, wind, nuclear, and coal plants. The country commissioned 250 gigawatts of new capacity last year while America struggled to bring 25 GW online.
Chris Miller, author of “Chip War” and one of the sharpest analysts of semiconductor geopolitics, testified before the Senate that AI leadership rests on “computing power, brain power, and electrical power.” He noted that “America has a substantial lead in computing power, but China leads in electrical power.” (CNBC) The implication is stark: chip superiority becomes a stranded asset if you cannot power the data centers that house those chips.
China’s grid advantages extend beyond raw generation. The country operates with reserve margins of 80-100% in many regions—meaning the grid can absorb massive new loads without brownouts. American grids often run with 15-20% reserves, and in some regions, utilities have begun refusing new data center connections entirely. Microsoft, Amazon, and Google have all reported delays in data center projects due to power availability constraints. Oracle’s Larry Ellison recently disclosed that the company had secured “county approval” for building a gigawatt-scale data center but still faced a multi-year timeline to bring sufficient power online.
The nuclear question looms over both nations. America pioneered commercial nuclear energy but has spent decades decommissioning plants rather than building them. The country currently operates 93 reactors generating roughly 775 TWh annually—the same fleet, essentially, that existed in 2000. China operates 55 reactors and has 23 more under construction, with plans to triple nuclear capacity by 2035. (World Nuclear Association) The lead times for nuclear plants span decades, meaning decisions made today shape energy availability in 2040.
Big Tech has begun scrambling for alternatives. Microsoft signed a deal to restart Three Mile Island’s undamaged reactor specifically to power AI data centers. Google announced plans to purchase power from small modular reactors still in development. Amazon acquired a nuclear-powered data center campus in Pennsylvania. These moves acknowledge the problem without solving it: contracted power from existing or near-term nuclear sources cannot scale fast enough to meet projected AI demand. McKinsey estimates that U.S. data center power consumption could reach 35 GW by 2030, up from roughly 17 GW today—growth that existing grid infrastructure cannot accommodate without massive investment. (McKinsey & Company)
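The McKinsey figures imply a compound growth rate worth making explicit. A back-of-the-envelope sketch, assuming a 2024 baseline for the “roughly 17 GW today” figure:

```python
# Implied compound annual growth rate (CAGR) of U.S. data center
# power demand, from the McKinsey figures cited above.
# Assumption: "today" means 2024, so 2030 is six years out.

start_gw, end_gw = 17, 35   # current vs. projected consumption, in GW
years = 2030 - 2024

cagr = (end_gw / start_gw) ** (1 / years) - 1
print(f"Implied growth: {cagr:.1%} per year")  # roughly 13% per year

# For context, overall U.S. electricity demand grew well under 1% per
# year for most of the past two decades, which is why a ~13%/yr load
# segment strains existing grid infrastructure.
```

A doubling in six years sounds gradual; 13% compounded annually is not, which is the planning problem utilities now face.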
The renewable energy picture adds another dimension. China installed more solar capacity in 2024 than the entire existing solar base of the United States. The country manufactures roughly 80% of global solar panels and 60% of wind turbines, supply chain dominance that extends from raw materials through finished products. When American firms like Elon Musk’s Tesla or Google’s renewable energy division seek to scale clean power, they often find themselves purchasing Chinese equipment—or waiting years for alternatives.
Natural gas presents a different dynamic. America’s shale revolution created abundant domestic gas supplies that power roughly 40% of electricity generation. That’s an advantage China lacks: the country imports significant LNG and burns coal for nearly 60% of electricity. But coal plants can be built quickly and run continuously, while American gas plants face their own permitting and pipeline constraints. The cleaner American grid carries speed penalties that the dirtier Chinese grid avoids.
The energy layer reveals a fundamental asymmetry: China treats power generation as strategic national infrastructure, planned and built by state direction. America treats it as a market outcome, subject to permitting delays, environmental reviews, NIMBY opposition, and regulatory fragmentation across 50 states and thousands of utilities. That difference compounds over time. Every year of faster Chinese capacity growth widens the gap that American data centers must somehow close.
Build velocity as competitive advantage
The infrastructure layer—data centers themselves—is where China’s construction prowess becomes operational advantage. Huang put it memorably: “They can build a hospital in a weekend.” The comparison isn’t hyperbole. China demonstrated during COVID-19 that it could erect functioning medical facilities in days. Data centers require more sophisticated engineering, but the underlying capability translates: rapid permitting, abundant labor, coordinated supply chains, and state support at every level.
American data center development operates under different constraints. A typical hyperscale facility in the United States requires 3-5 years from site selection to operation. Environmental impact assessments alone can consume 18-24 months. Interconnection agreements with utilities involve multi-year queues. Local zoning battles, water rights disputes, and community opposition add further delays. The National Association of Manufacturers estimates that permitting timelines for industrial facilities in America have doubled since 2010 while construction costs have risen 40% above inflation.
China’s “East Data, West Compute” strategy illustrates the difference in approach. The central government designated computing clusters in resource-rich western provinces like Guizhou, Inner Mongolia, and Ningxia, then directed infrastructure investments to connect them with data-dense eastern markets through dedicated fiber networks. The plan coordinates land allocation, power generation, water access, and transportation in a single integrated framework. Nothing comparable exists in the United States, where data center siting depends on private negotiations with local authorities, utilities, and landowners.
CoreWeave’s CEO captured the American anxiety: “America’s ability to lead in AI hinges on a simple but urgent question: Can we build the computing infrastructure fast enough to unleash AI’s full potential?” (CoreWeave blog) The company itself has built a multibillion-dollar business providing cloud GPU compute, but even it must navigate the same infrastructure bottlenecks facing everyone else.
The numbers tell the story. JLL Research estimates that total U.S. data center capacity stands at roughly 5,400 MW across major markets, with another 4,700 MW under construction. China’s capacity exceeds 8,000 MW and is growing twice as fast. More importantly, Chinese facilities move from groundbreaking to operation in 12-18 months versus 36-60 months in America. That velocity differential means China can respond to new AI workloads—a breakthrough model requiring massive inference infrastructure, for instance—far faster than American competitors.
Water presents another constraint that favors Chinese planning. Modern data centers require enormous quantities of cooling water—a single hyperscale facility can consume 5 million gallons daily. American drought conditions have already forced Microsoft to reduce operations at some Western facilities. China’s western compute clusters sit near abundant water resources intentionally; the siting wasn’t left to market forces.
The Stargate initiative announced by the Trump administration in January 2025 represents an attempt to close this gap through concentrated investment. The $500 billion project, backed by OpenAI, SoftBank, Oracle, and others, promises to build new data center capacity at unprecedented American scale. But even with regulatory fast-tracking, Stargate facilities won’t come online until 2027 at the earliest. (White House announcement) China, meanwhile, continues building. The infrastructure race operates on accumulated advantage: every year of faster construction creates capacity that generates revenue that funds further expansion. Catching up requires not matching China’s pace but exceeding it for sustained periods.
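The “accumulated advantage” point can be made concrete with a toy compounding model. The starting capacities echo the figures cited above, but the growth rates are illustrative assumptions, not measured values:

```python
# Toy model of compounding data center capacity. If the leader grows
# at rate g_lead and the chaser at g_chase, the chaser reaches parity
# only if g_chase > g_lead, and the time to parity depends on the
# spread between the two rates. All numbers are illustrative.

import math

lead_mw, chase_mw = 8000, 5400   # assumed starting capacities (MW)

def years_to_parity(g_chase, g_lead):
    """Years until the chaser's capacity matches the leader's, or None."""
    if g_chase <= g_lead:
        return None  # a slower or equal chaser never closes the gap
    # chase * (1+g_chase)^t = lead * (1+g_lead)^t  =>  solve for t
    return math.log(lead_mw / chase_mw) / math.log((1 + g_chase) / (1 + g_lead))

for g_chase in (0.10, 0.15, 0.25):
    t = years_to_parity(g_chase, g_lead=0.10)
    label = f"{t:.1f} years" if t else "never"
    print(f"chaser at {g_chase:.0%} vs leader at 10%: parity in {label}")
```

The takeaway matches the prose: matching the leader’s pace never closes the gap, a modest edge takes close to a decade, and only a sustained, decisive edge reaches parity within a few years.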
Chips: the layer where America leads
The chip layer is where American advantages shine brightest—and where export controls have been most aggressively deployed. The United States and its allies control approximately 90% of advanced semiconductor manufacturing through a tight network centered on TSMC in Taiwan, Samsung in South Korea, and ASML in the Netherlands. NVIDIA alone holds roughly 80% of the AI accelerator market. American chip designers at NVIDIA, AMD, and Broadcom set the performance frontier that everyone else chases.
This dominance is real and substantial. NVIDIA’s Blackwell architecture, the current state of the art, achieves commercial yields estimated at 60-80%—meaning most chips off the production line actually work. Huawei’s competing Ascend chips struggle with yields between 5% and 20%, according to SemiAnalysis research. (Tom’s Hardware) That yield differential translates directly into cost and availability: for every 10 chips Huawei manufactures, perhaps 1-2 work; for every 10 NVIDIA produces, 6-8 work. The economics compound viciously against the laggard.
The Atlantic Council estimates that America and its allies maintain a manufacturing lead of 5-15 years over China in cutting-edge logic chips. (Atlantic Council) Export controls have deliberately targeted equipment and know-how required to close that gap. ASML’s extreme ultraviolet lithography machines—essential for manufacturing chips below 7 nanometers—cannot legally be sold to China. Applied Materials, Lam Research, and KLA together provide roughly 40% of global semiconductor equipment and face similar restrictions.
But China is adapting faster than skeptics expected. Huawei’s Ascend 910C, though inferior to NVIDIA’s best, has found buyers willing to accept the performance penalty for supply chain independence. Huawei plans to produce roughly 600,000 Ascend chips in 2026, doubling 2025 output. More significantly, SemiAnalysis found that Huawei’s CloudMatrix system—which links far more chips together to compensate for individual chip weakness—outperformed NVIDIA’s competing system on certain metrics. (Bloomberg) Architecture can partially compensate for process node disadvantages.
Baidu recently activated a computing cluster with 30,000 of its Kunlun chips, which it claims can train models competitive with DeepSeek’s latest releases. China Mobile placed a $139 million order for Baidu’s chips. (Rest of World) Cambricon Technologies reported 4,000% revenue growth in early 2025 and achieved profitability for the first time. The Chinese domestic chip ecosystem is nascent but accelerating under the pressure of export controls.
The deeper vulnerability in America’s chip advantage lies in geography. Taiwan manufactures over 90% of the world’s most advanced chips through TSMC. South Korea’s Samsung produces most of the rest. Both sit within range of Chinese military power. A Taiwan conflict—or even sustained tensions that disrupt shipping—would cripple Western AI development overnight. The Arizona CHIPS Act facilities represent diversification, but they remain years from significant production. Huang acknowledged this at CSIS: “Taiwan really needs to have some acknowledgment for the incredible effort that they’re putting in place to help us reindustrialize the United States.” He noted that a recent TSMC event in Arizona was “two-thirds Taiwanese and one-third American”—a reminder that American chip manufacturing depends heavily on imported expertise.
The chip layer thus presents a paradox. America holds commanding leads in design and equipment. But the manufacturing itself concentrates in vulnerable locations. And China is investing tens of billions annually to reduce its dependencies, while American countermeasures—however effective in the short term—may accelerate Chinese self-sufficiency rather than prevent it.
The export control strategy carries inherent tensions. Restricting NVIDIA from selling to China costs the company revenue—roughly $10 billion annually by some estimates—that would otherwise fund R&D for next-generation chips. Huawei and other Chinese firms fill the vacuum with domestic alternatives that, while inferior today, improve with each iteration. Meanwhile, American allies like Japan, South Korea, and the Netherlands must be continuously persuaded to maintain restrictions against their own commercial interests. The coalition that enforces chip restrictions is real but not automatic; it requires constant diplomatic maintenance.
Memory chips present a different picture. Chinese firms like YMTC have made significant progress in NAND flash memory, though they remain a generation or two behind Samsung and SK Hynix. DRAM memory—essential for AI inference—remains more concentrated in Korean hands. The Micron ban in China demonstrates how both sides can weaponize semiconductor dependencies. But memory is not compute; catching up in memory does not solve China’s AI accelerator problem.
Models, applications, and the long game
The model layer is where public attention focuses most intensely—OpenAI versus DeepSeek, GPT-5 versus Qwen, American proprietary systems versus Chinese open-source alternatives. Here the picture is genuinely mixed.
American labs still lead at the frontier. GPT-5.2, Claude Opus 4.5, and Gemini 3 Pro represent capabilities that no Chinese model has publicly matched. The performance gap at the very top—measured by complex reasoning benchmarks, coding tasks, and multimodal understanding—remains real. American firms also lead in the safety research and alignment work that may prove critical as models grow more capable.
But the gap is narrowing with startling speed. The Wilson Center documented that the performance differential between leading American and Chinese models shrank from 9.3% in early 2024 to just 1.7% by February 2025. (Wilson Center) DeepSeek’s V3 and R1 releases in late December 2024 and January 2025 achieved comparable performance to leading Western models at dramatically lower training costs—$5.6 million versus estimates exceeding $100 million for GPT-4. That efficiency advantage matters because it democratizes who can participate in frontier development.
China’s open-source strategy represents perhaps its most significant strategic move. Jensen Huang, speaking in Beijing, was unambiguous: “Models like DeepSeek, Alibaba, Tencent, MiniMax, and Baidu Ernie bot are world class, developed here and shared openly.” He called Qwen and DeepSeek “the best open reasoning models in the world today.” (CNBC) Chinese open-source models grew from 1.2% of global usage in late 2024 to nearly 30% in 2025. (South China Morning Post)
The application layer—where AI transforms actual economic activity—increasingly favors Chinese deployment. China operates roughly 2 million industrial robots, installing approximately 295,000 more in 2024 alone—more than the rest of the world combined. America installed roughly 34,000. (The AI Insider) China’s Ministry of Industry and Information Technology estimates that over 60% of large Chinese manufacturers have adopted AI integration. The country’s 1.4 billion people generate training data at volumes unmatched elsewhere, while its factory floors provide endless deployment opportunities.
The economic implications matter more than benchmark scores. Goldman Sachs estimates that every $1 in AI investment generates $4.90 in economic output. (Bloomberg) But that multiplier depends on what you’re investing in. An AI system automating a factory creates different value than one writing marketing copy. America leads in AI software and services; China leads in AI-as-industrial-infrastructure.
Talent flows shape long-term prospects. The United States still leads China in top AI researchers, but the gap is narrowing: in 2019, 59% of leading AI researchers worked in America versus 11% in China; by 2022, those figures were 42% and 28%. (The Conversation) China produced 3.57 million STEM graduates in 2020 versus 820,000 in America. Nearly half of the world’s top AI researchers completed their undergraduate studies in China—many currently working in the United States. Whether they stay depends on immigration policy, research funding, and academic freedom. Those factors are policy choices, not laws of nature.
The strategic question is whether America’s top-layer advantages in frontier models can outweigh China’s bottom-layer advantages in energy and build velocity. History offers cautionary precedents. Britain dominated early computing and jet aviation but ceded leadership to nations with larger industrial bases. The Soviet Union matched American nuclear physics but could not match American chip manufacturing at scale. Technological leads are not self-sustaining; they must be constantly renewed through investment in the full stack.
The prediction business is treacherous, but the analysis suggests a neither/nor outcome rather than clear victory for either side. China will likely continue leading in raw energy capacity, construction velocity, open-source model distribution, and industrial AI deployment. America will likely continue leading in frontier model capability, chip design, and research talent—at least for the foreseeable future. The countries are building different AI ecosystems optimized for different purposes: American AI for cutting-edge capability, Chinese AI for mass deployment.
The implications for the rest of the world are significant. Developers in Brazil, Nigeria, and Indonesia increasingly choose Chinese open-source models because they’re free, performant, and come without export restrictions. That choice shapes which ecosystem’s values and dependencies propagate globally. The AI race may ultimately be won not by whoever builds the smartest system but by whoever builds the stack that everyone else adopts.
The investment picture tells part of this story. The United States attracted $109 billion in AI investment in 2024, nearly 12 times China’s total. (RAND Corporation) American venture capital ecosystems fund frontier research that Chinese state-directed investment cannot match for agility. But Chinese investment concentrates differently—more infrastructure, more deployment, less pure research. The comparison is not quite apples to apples.
Private capital also reveals different strategic priorities. American AI investment flows toward consumer applications, enterprise software, and frontier research—chatbots, copilots, and AGI moonshots. Chinese AI investment flows toward industrial robotics, manufacturing integration, and infrastructure. The former produces more impressive demos; the latter produces more deployed systems. Which matters more depends on your definition of “winning.”
The global south increasingly faces a forced choice. Chinese AI systems—open-source, freely available, and unconstrained by export controls—offer developing nations participation in the AI revolution on accessible terms. American systems—more capable at the frontier but proprietary, expensive, and sometimes restricted—offer different tradeoffs. The Belt and Road Initiative’s digital arm now includes AI as a component, bundling models with infrastructure investment. No comparable American program exists at scale.
Huang’s five-layer framework offers no comfort to those seeking simple answers. It reveals instead a competition that will be won through grinding improvements at every layer—power plants and permitting reform as much as neural architecture innovations. The United States must build faster if it wants to leverage its chip and model advantages. China must achieve chip breakthroughs if it wants to leverage its energy and deployment advantages. Neither path is assured. Both require sustained effort over decades.
The race is not about building the smartest AI first. It’s about building the stack that becomes default—the energy supply that powers the data centers, the infrastructure that houses the chips, the chips that run the models, the models that power the applications, the applications that reshape industries. Whoever builds that full stack most effectively will shape how AI transforms civilization. At the foundation, this contest is about concrete and copper as much as silicon and software. Jensen Huang understands that. The question is whether policymakers in Washington and Beijing understand it equally well—and whether they’re willing to make the unglamorous investments in power plants and permitting reform that victory requires.