Samsung Bets $73 Billion It Can Win the AI Memory War
Samsung Electronics announced on Thursday that it will spend more than 110 trillion won — roughly $73.24 billion at current exchange rates — on semiconductors in 2026, the largest single-year investment in the company’s history. The number lands like a thunderclap. Samsung’s 2025 capital expenditure totaled 52.7 trillion won; the new commitment, which also folds in research-and-development spending, is more than double that figure — an escalation that has no precedent in the global semiconductor industry. It arrives one day after Samsung signed a sweeping Memorandum of Understanding with AMD to become the primary supplier of HBM4 memory for AMD’s next-generation Instinct MI455X GPUs. And it lands in the same news cycle as South Korean press reports that Samsung will supply HBM4 chips to OpenAI for the AI lab’s forthcoming custom silicon. Taken together, the announcements mark Samsung’s most aggressive attempt yet to claw back market share from SK Hynix in the high-bandwidth memory race that now underpins the entire AI infrastructure stack.
The timing is not accidental. Last week, Nvidia’s Jensen Huang stood on stage at GTC and unveiled a three-chip empire spanning Rubin GPUs, Vera CPUs, and a dedicated inference processor — hardware that will consume staggering volumes of high-bandwidth memory. The Semiconductor Industry Association reported that global chip sales rose 3.7 percent month over month in January 2026, extending a growth streak driven overwhelmingly by AI demand that pushed the industry past $830 billion in annual revenue in 2025. Meanwhile, Alibaba and Baidu hiked AI cloud computing prices by up to 34 percent because inference demand in China has outstripped supply. The AI memory bottleneck is real, the companies racing to fill it are spending at unprecedented scale, and Samsung just declared it will outspend everyone else on the planet to dominate it.
The biggest chip bet anyone has ever placed
To appreciate the magnitude of $73 billion, consider what it buys relative to the competition. SK Hynix, Samsung’s chief rival in memory and the current leader in the high-bandwidth memory chips that sit beside Nvidia’s GPUs, spent approximately 23 trillion won on capital expenditure in 2025. Samsung’s new figure is more than four times that amount. It exceeds the entire annual revenue of Intel. It dwarfs the combined R&D budgets of every European chipmaker. And it signals that Samsung’s leadership — Vice Chairman and CEO Young Hyun Jun, who assumed the role during a turbulent 2024 — has concluded that catching up in AI memory is not a problem that incremental spending can solve. Samsung also stated in its corporate filing that it is pursuing “meaningful mergers and acquisitions” in robotics, medical technology, automotive electronics, and air-conditioning solutions, according to Yonhap News Agency — a diversification push that signals the $73 billion is not purely a memory play but a broader bet on Samsung’s future as an AI-era conglomerate.
The investment covers both capital expenditure for new fabrication capacity and research-and-development spending on next-generation process nodes. Samsung’s Pyeongtaek mega-complex, already one of the largest chip fabrication sites on Earth, is the anchor for the expansion. It is the same facility where Lisa Su, AMD’s chair and CEO, stood beside Jun on March 18 to sign the MoU that will reshape the competitive landscape of AI memory. The ceremony was not a photo opportunity. It was a calculated statement of strategic intent from two companies that have decided their futures are deeply intertwined.
The AMD partnership deserves close scrutiny because it represents Samsung’s most important strategic pivot of the decade. Under the MoU, Samsung becomes the primary HBM4 supplier for AMD’s Instinct MI455X GPU, the next-generation AI accelerator that will power AMD’s Helios rack-scale architecture. Samsung will also supply advanced DDR5 memory for AMD’s sixth-generation EPYC CPUs, codenamed “Venice.” And the two companies will explore a foundry partnership under which Samsung would manufacture future AMD chips — a direct challenge to TSMC’s stranglehold on advanced-node fabrication. The collaboration builds on a nearly two-decade relationship: Samsung was already AMD’s primary HBM3E partner, supplying memory for the current Instinct MI350X and MI355X accelerators. The MoU escalates that relationship from supplier to strategic ally.
Samsung’s HBM4 specifications are formidable. Built on the company’s most advanced sixth-generation 10-nanometer-class DRAM process — designated 1c — paired with a 4-nanometer logic base die, Samsung’s HBM4 delivers processing speeds of up to 13 gigabits per second and maximum bandwidth of 3.3 terabytes per second. Samsung was the first company to begin mass production of HBM4, having started commercial shipments in February 2026. These are not paper specifications. They represent silicon in production, chips shipping to customers, and bandwidth numbers that will define the performance ceiling for the next generation of AI training runs. The question is not whether Samsung can build competitive HBM4. It already has. The question is whether it can build enough of it, fast enough, to overcome the structural advantage SK Hynix built during the HBM3 and HBM3E generations.
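Those headline numbers are internally consistent. As a sanity check, the quoted 3.3 terabytes per second follows from the per-pin speed once you assume the 2048-bit per-stack interface that the JEDEC HBM4 standard defines — the interface width is an assumption here, since Samsung's figures quote only the speed and the bandwidth:

```python
# Back-of-envelope check of Samsung's quoted HBM4 bandwidth.
# Assumption: a 2048-bit interface per stack (JEDEC HBM4 width);
# the 13 Gb/s per-pin speed is the figure Samsung quotes.
pin_speed_gbps = 13          # gigabits per second, per pin
interface_width_bits = 2048  # pins per stack (assumed JEDEC HBM4 width)

# bits/s across the whole interface, divided by 8 for bytes
bandwidth_gbytes = pin_speed_gbps * interface_width_bits / 8  # GB/s per stack
print(f"{bandwidth_gbytes / 1000:.2f} TB/s per stack")  # → 3.33 TB/s
```

The arithmetic lands on 3.33 TB/s, matching the "maximum bandwidth of 3.3 terabytes per second" in Samsung's specification.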
Follow the bandwidth, find the bottleneck
The AI industry’s appetite for high-bandwidth memory has become the defining constraint of 2026. Every large language model, every diffusion model, every reinforcement-learning pipeline at frontier scale is bottlenecked not by compute alone but by how fast data can move between processors and memory. HBM solves this by stacking DRAM dies vertically and connecting them with through-silicon vias, creating bandwidth that conventional memory architectures cannot match. When Nvidia designs a GPU like the forthcoming Rubin Ultra, it specifies HBM requirements months in advance, and the memory suppliers compete for those binding contracts in a process that resembles sovereign arms procurement more than normal commercial negotiation.
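A rough worked example shows why memory, not compute, sets the latency floor. In batch-1 LLM decoding, every generated token must stream the model's weights out of memory at least once, so weight size divided by aggregate bandwidth is a hard lower bound on per-token latency. The model size and stack count below are hypothetical; the per-stack bandwidth is Samsung's quoted HBM4 figure:

```python
# Illustrative lower bound on batch-1 decode latency:
# each generated token reads every weight from HBM once.
params_billions = 70        # hypothetical model size
bytes_per_param = 2         # FP16 weights
stacks = 8                  # hypothetical HBM4 stacks on one accelerator
tb_per_s_per_stack = 3.3    # Samsung's quoted HBM4 bandwidth per stack

weight_bytes = params_billions * 1e9 * bytes_per_param      # 140 GB of weights
total_bandwidth = stacks * tb_per_s_per_stack * 1e12        # bytes per second
latency_ms = weight_bytes / total_bandwidth * 1000          # memory-bound floor
print(f"~{latency_ms:.1f} ms per token, minimum")
```

Under these assumptions the floor is about 5.3 milliseconds per token no matter how fast the compute units run — which is why GPU designers lock in HBM supply contracts months before silicon ships.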
SK Hynix has dominated that process. The South Korean rival secured roughly two-thirds of Nvidia’s HBM4 orders, a market share advantage that reflects both superior yield rates on earlier HBM generations and deeper engineering integration with Nvidia’s packaging and thermal architectures. Samsung’s HBM3E qualification for Nvidia’s H200 GPUs was delayed repeatedly in 2024 and early 2025, a humiliation for a company that invented DRAM mass production. The delays cost Samsung billions in lost revenue and, more importantly, eroded customer confidence at precisely the moment when AI labs were signing multi-year supply agreements worth tens of billions of dollars.
The earnings season underscored the stakes. Micron Technology reported what should have been a triumphant quarter — AI-driven demand pushed revenue and earnings to record levels — but the stock slipped because investors balked at the scale of Micron’s own capital spending plans, a dynamic that illustrates the central tension in the memory industry right now. The demand is undeniable, but so is the risk of overbuilding. Every memory maker is pouring capital into HBM capacity, and the company that scales fastest without destroying margins will own the decade. Samsung’s $73 billion says it intends to be that company.
Samsung’s AMD-first strategy is a calculated response to the SK Hynix problem. Rather than fighting SK Hynix head-on for Nvidia’s business — a battle where SK Hynix holds the incumbent advantage — Samsung is positioning itself as AMD’s indispensable partner, betting that AMD’s share of the AI accelerator market will grow substantially as hyperscalers diversify their GPU supply chains. The bet has legs. AMD struck a major deal with Meta Platforms to deploy six gigawatts’ worth of GPUs over the next five years, as Mobile World Live reported, a staggering commitment that signals Meta’s determination to reduce its dependence on Nvidia. If AMD captures even 20 percent of the AI accelerator market by 2028, Samsung’s position as AMD’s primary memory supplier would translate into tens of billions of dollars in annual HBM revenue — revenue that bypasses the SK Hynix chokepoint entirely.
The OpenAI supply deal adds another dimension. If Samsung supplies HBM4 for OpenAI’s forthcoming custom AI chip — the silicon that OpenAI has been developing in-house to reduce its own dependence on Nvidia — then Samsung gains a foothold in the custom-silicon segment that could grow to rival the merchant GPU market within five years. OpenAI is not the only lab pursuing custom chips. Google has its TPUs, Amazon has Trainium, Microsoft has Maia, and Anthropic has signaled interest in custom silicon. Each of these projects needs HBM, and Samsung is positioning itself to be the supplier of choice for every company that is not already locked into SK Hynix through Nvidia contracts.
Stitching these data points together yields a proprietary estimate worth pausing on. If Samsung captures primary HBM4 supply positions with AMD, OpenAI, and at least one additional hyperscaler, while maintaining its existing baseline memory business, the company’s semiconductor division could generate $65 to $80 billion in annual memory revenue by 2028 — roughly double the approximately $38 billion it reported in 2025. That projection assumes HBM prices remain elevated due to supply constraints, AMD’s data center GPU share rises to 15-20 percent, and at least two custom-silicon programs reach production scale. No single announcement from Samsung, AMD, or OpenAI contains this figure. It emerges only from combining the $73 billion investment commitment, the AMD MoU scope, the OpenAI supply report, and industry-wide HBM demand forecasts. If the estimate proves accurate, Samsung’s semiconductor division alone would be worth more than most S&P 500 companies.
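For readers who want to interrogate that estimate, here is one purely hypothetical decomposition that reproduces the range. None of the increments below are disclosed figures; they are illustrative splits consistent only with the assumptions stated above:

```python
# Hypothetical reconstruction of the $65-80B 2028 memory-revenue scenario.
# All increments are illustrative assumptions, not company guidance.
baseline_2025 = 38.0       # $B, Samsung's approximate 2025 memory revenue

# Assumed annual increments by 2028, in $B (illustrative):
hbm4_amd = 15.0            # primary HBM4 supply for AMD's Instinct line
hbm4_custom = 10.0         # OpenAI plus one more custom-silicon program
conventional_growth = 5.0  # baseline DRAM growth at elevated prices

low = baseline_2025 + hbm4_amd + hbm4_custom + conventional_growth
high = low + 12.0          # upside: a second hyperscaler signs on
print(f"${low:.0f}B to ${high:.0f}B in annual memory revenue")
```

The point of the sketch is not the specific splits but the sensitivity: roughly half of the projected growth rides on HBM4 design wins that have been announced but not yet shipped at scale.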
The ways this $73 billion gamble could backfire
Samsung has bet big before and lost. The company’s foundry division has struggled for years to match TSMC’s yield rates on cutting-edge process nodes, burning through billions in capital expenditure without achieving competitive parity. There is no guarantee that the same execution challenges will not plague Samsung’s HBM4 ramp. Yield rates on vertically stacked memory are notoriously sensitive to process variation — a single defective die in a 12-high stack renders the entire package worthless. SK Hynix solved this problem earlier and more decisively than Samsung, which is why SK Hynix still commands the majority of Nvidia’s orders even as Samsung ships its own HBM4 product. Spending $73 billion does not automatically solve yield problems. It buys capacity, but capacity without yield is expensive scrap.
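The stack-yield arithmetic is unforgiving and worth making explicit. If a package is scrapped whenever any single die in it is defective, package yield is per-die yield raised to the stack height. A minimal sketch with illustrative per-die yields, since the actual figures are undisclosed:

```python
# Compounding yield for vertically stacked HBM:
# one bad die scraps the entire package.
def stack_yield(per_die_yield: float, dies: int) -> float:
    """Probability that every die in a stack is defect-free."""
    return per_die_yield ** dies

# Illustrative per-die yields on a 12-high stack (real yields undisclosed).
for per_die in (0.99, 0.95):
    print(f"per-die {per_die:.0%} -> package {stack_yield(per_die, 12):.1%}")
```

On a 12-high stack, a four-point gap in per-die yield (99 percent versus 95 percent) compounds into roughly a 35-point gap in usable packages — the shape of the structural advantage SK Hynix built during the HBM3 era.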
The AMD dependency also carries risk. AMD’s data center GPU business, while growing rapidly, remains a fraction of Nvidia’s. If AMD’s Instinct MI455X fails to deliver competitive performance against Nvidia’s Rubin architecture — or if AMD’s software ecosystem cannot match Nvidia’s CUDA advantage — then Samsung’s primary HBM4 customer could face slowing demand growth at the worst possible time. Samsung would be left with excess HBM4 capacity optimized for AMD’s specific packaging and interface requirements, capacity that cannot simply be redirected to other customers without significant requalification. The big tech sector committed roughly $690 billion to AI infrastructure in 2025-2026, but that spending is not evenly distributed across GPU vendors. Nvidia captures the overwhelming majority, and Samsung’s bet on AMD assumes that majority will shrink.
There is also the macroeconomic risk. Samsung’s $73 billion plan assumes that AI infrastructure spending will continue accelerating through 2026 and into 2027. If the AI revenue curve for enterprises disappoints — if the gap between AI capital expenditure and AI revenue generation persists or widens — then hyperscalers could slow their hardware procurement cycles. The memory boom could reverse into a glut within 18 to 24 months if demand softens. The semiconductor industry has a painful history of cyclical busts following investment booms. Samsung’s own memory division posted operating losses in 2023 during the last downcycle, and the scale of the 2026 investment means the amplitude of any future correction would be correspondingly larger.
China introduces another variable. Samsung’s investment plan does not exist in a vacuum. Chinese memory makers, particularly CXMT (ChangXin Memory Technologies), have been aggressively expanding DRAM production capacity with state subsidies that distort normal market economics. While Chinese fabs are not yet competitive in HBM — the vertical stacking and advanced packaging requirements create a technology gap of several years — they are rapidly gaining ground in conventional DRAM, which could pressure Samsung’s baseline memory pricing and margins. If Samsung’s conventional memory revenue declines due to Chinese competition at the same time that its HBM ramp faces yield or demand challenges, the $73 billion investment could strain even Samsung’s considerable balance sheet. The company reported net cash exceeding 100 trillion won in early 2026, providing substantial cushion. But cushion and immunity are different things.
The geopolitical dimension adds yet another layer of uncertainty. The United States continues to rework AI chip export controls, and any tightening of restrictions on memory exports to Chinese AI companies could reduce Samsung’s addressable market. Samsung manufactures chips primarily in South Korea, which provides some insulation from U.S. export rules. But Samsung also operates a large fabrication complex in Austin, Texas, and it maintains significant sales exposure to Chinese customers who are racing to build domestic AI infrastructure before potential new restrictions take effect. The interplay between Korean trade policy, American technology controls, and Chinese industrial subsidies creates a three-body problem that no amount of capital investment can fully hedge against.
Where the memory war goes from here
Samsung’s $73 billion investment transforms the competitive dynamics of AI memory from a two-horse race into a full-blown arms race. SK Hynix will respond — it has no choice. The company has already been scaling up production capacity to meet AI demand and will likely accelerate its own HBM4 ramp in response to Samsung’s AMD partnership. Micron, the third player in the memory triad, has positioned itself as a capacity alternative for hyperscalers seeking supply-chain diversification, and its record revenue quarter proves that there is enough demand to support three major HBM suppliers — for now. The beneficiary of this three-way competition is the AI industry itself: more HBM supply means lower memory costs, which translates into cheaper inference and training for every AI lab, startup, and enterprise buyer.
The AMD-Samsung axis bears watching because it could reshape the GPU market’s competitive structure. Lisa Su’s willingness to fly to Pyeongtaek and sign a ceremony-grade MoU with Samsung’s CEO signals that AMD views memory partnership as a strategic differentiator, not merely a procurement relationship. If AMD can offer hyperscalers a fully integrated stack — Instinct GPUs with guaranteed Samsung HBM4 supply, EPYC CPUs with optimized DDR5, and a Helios rack architecture designed from the ground up for AI workloads — it would represent the most credible alternative to Nvidia’s end-to-end dominance since Google introduced TPUs nearly a decade ago. The foundry partnership discussion adds a long-term dimension: if Samsung fabricates future AMD chips, it would create a vertically integrated supply chain that rivals the TSMC-Nvidia relationship in both depth and strategic alignment.
For operators, investors, and AI practitioners, here is what to track in the quarters ahead:
- Samsung’s HBM4 yield rates: The company has not disclosed production yields, and the gap between Samsung and SK Hynix on this metric will determine whether Samsung’s capacity investments translate into competitive shipment volumes. Watch for third-party teardown analyses and customer qualification announcements.
- AMD Instinct MI455X benchmarks: Samsung’s memory strategy is only as strong as AMD’s GPU performance. When AMD publishes MI455X benchmarks — expected mid-2026 — compare them against Nvidia Rubin results to gauge whether Samsung backed the right horse.
- OpenAI custom chip timeline: If Samsung’s HBM4 supply deal with OpenAI materializes into production silicon, it would validate Samsung’s diversification strategy beyond AMD. Watch for OpenAI tape-out announcements and sampling timelines.
- SK Hynix counter-moves: SK Hynix will not cede ground passively. Expect expanded partnerships, aggressive pricing on HBM4, and potentially accelerated development of HBM4E or next-generation memory architectures. The competitive response will reveal how seriously SK Hynix views the Samsung threat.
- Chinese memory capacity growth: CXMT and other Chinese DRAM manufacturers are expanding rapidly. If they begin qualifying for non-HBM AI workloads — such as DDR5 for inference servers — the margin impact on Samsung’s conventional memory business could offset gains in the premium HBM segment.
- Macroeconomic signals on AI spending: Monitor hyperscaler capital expenditure guidance each quarter. If Meta, Microsoft, Google, or Amazon signal any deceleration in AI infrastructure spending, the demand assumptions underlying Samsung’s $73 billion bet could weaken rapidly.
Samsung has placed the largest chip investment in corporate history on a thesis that the AI memory bottleneck will persist and intensify. The thesis has strong supporting evidence today. But “today” is the easy part. The $73 billion question is whether Samsung can execute at the yield rates, production volumes, and customer timelines required to turn record spending into record returns — before the next semiconductor cycle turns cold.
In other news
Xiaomi unmasked as Hunter Alpha’s creator — The trillion-parameter AI model that appeared anonymously on OpenRouter last week, sparking speculation it was DeepSeek V4, has been identified as a product of Xiaomi, the Chinese smartphone and EV maker. Xiaomi CEO Lei Jun simultaneously announced the company will invest at least $8.7 billion in AI over the next three years, signaling a pivot from hardware margins to foundation-model ambitions.
OpenAI joins Anthropic and Google in the race for cheaper AI — OpenAI announced pricing cuts and new efficiency-focused inference modes, joining a broader industry push to reduce the cost of AI compute as competition intensifies among frontier model providers.
Adobe Firefly expands video and image creation — Adobe rolled out new AI capabilities for Firefly, including expanded video generation and custom model training for enterprise customers, pushing deeper into the generative media tools market that Runway and Pika have pioneered.
Congress drafts AI regulation bill — Lawmakers introduced a draft bill that would revamp the online landscape with new disclosure requirements for AI-generated content and liability frameworks for AI systems deployed in high-stakes decisions, marking one of the most comprehensive federal AI proposals to date.
Micron posts record revenue but stock drops on spending fears — Micron Technology reported record quarterly revenue fueled by AI memory demand, but shares slipped after the company disclosed capital expenditure plans that exceeded analyst expectations, reflecting Wall Street’s unease with the scale of investment the AI memory boom now demands.