Meta's $135 Billion AI Gamble Is Already Unraveling
Mark Zuckerberg told Wall Street in January that 2026 would be “a pivotal year for delivering personal superintelligence.” Two months later, Meta’s flagship next-generation AI model — codenamed Avocado — has been pushed back to at least May after internal benchmarks showed it falling short of Google’s Gemini 3.0, OpenAI’s GPT-5.4, and Anthropic’s Claude Opus 4.6 in logical reasoning, software development, and agentic task execution. The company that once dominated AI’s open-source frontier with Llama is now reportedly discussing a move that would have been unthinkable twelve months ago: temporarily licensing Gemini from Google to power its own products while its engineers scramble to close the gap. The delay arrives against a backdrop of record capital expenditure — up to $135 billion committed for 2026, a 73% increase over last year — and a strategic pivot from Llama’s open-source philosophy to a closed, proprietary model designed to compete head-to-head with the very companies Meta may now need to lean on.
The numbers tell the story of a company betting everything on a thesis that has not yet produced its centerpiece artifact. Meta generated $200.97 billion in revenue in 2025, a 22% year-over-year increase, and its advertising engine remains the most profitable attention machine in technology. But the $115-to-$135 billion capex range for 2026 dwarfs the capital budgets of every peer except Microsoft, and the bulk of that spending flows into a newly created division called Meta Superintelligence Labs, anchored by a $14.3 billion acquisition of 49% of Scale AI that brought twenty-seven-year-old founder Alexandr Wang into Zuckerberg’s inner circle. Nine months after that deal closed, the superintelligence team’s first deliverable is a model that slots somewhere between Gemini 2.5 and Gemini 3.0 — good enough to beat Meta’s own previous generation, but not good enough to justify the narrative that Meta can build a proprietary frontier lab from scratch. This is the story of the most expensive AI bet in corporate history, why it is wobbling, and what the fallout means for every company building on or against Meta’s infrastructure.
The $135 billion cathedral and the model that was not ready
Meta’s capital expenditure trajectory reads like a hockey stick drawn by someone who lost their sense of scale. The company spent $37 billion on capex in 2024, scaled to $72.2 billion in 2025, and then guided Wall Street to a range of $115 billion to $135 billion for 2026. At the midpoint, that is roughly $125 billion, a sum larger than the annual GDP of most of the world’s countries. Meta is not alone in this spending frenzy — the broader big tech AI capex surge now exceeds $690 billion across the industry — but its growth rate is the steepest. The money is flowing into GPU clusters, custom silicon, and a constellation of new data centers designed to train models at scales that, as recently as 2024, existed only in research papers.
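The growth rates behind that hockey stick are worth verifying against the figures cited above. A quick sketch, using the article's 2024 and 2025 actuals and the midpoint of the 2026 guidance range:

```python
# Capex figures cited in the article, in billions of dollars.
# 2026 uses the midpoint of the $115B-$135B guidance range.
capex = {2024: 37.0, 2025: 72.2, 2026: (115 + 135) / 2}

growth_2025 = capex[2025] / capex[2024] - 1   # 2024 -> 2025 increase
growth_2026 = capex[2026] / capex[2025] - 1   # 2025 -> 2026 increase at midpoint

print(f"2025 growth: {growth_2025:.0%}")            # roughly 95%
print(f"2026 growth at midpoint: {growth_2026:.0%}") # the ~73% increase cited
```

The 73% figure quoted earlier in the article corresponds to the guidance midpoint; at the top of the range ($135 billion) the year-over-year increase would be closer to 87%.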
The thesis behind this spending is coherent in the abstract. Zuckerberg believes that the company that builds “personal superintelligence” — an AI system capable of acting as a persistent, personalized agent across every Meta surface — will capture the next trillion-dollar platform shift. The logic: if Meta can embed a frontier model into Facebook, Instagram, WhatsApp, Messenger, and its nascent metaverse products, every user interaction becomes an AI interaction, every AI interaction generates proprietary data, and every proprietary data point sharpens the model in ways no competitor can replicate. It is a flywheel strategy, and on paper it is the most defensible AI moat any consumer company could build.
The problem is that the flywheel requires a frontier model, and Meta does not have one. Avocado was designed to be that model — the successor to the Llama series, trained on Meta’s expanded infrastructure, and positioned as the centerpiece of the company’s next phase of AI development. Instead, internal evaluations revealed that Avocado’s performance in logical reasoning, coding, and autonomous task planning lags the current generation of models from Google, OpenAI, and Anthropic. The gap is not trivial. On the benchmarks that matter to enterprise buyers and developer platforms — the domains where revenue concentration is highest — Avocado reportedly performs below Gemini 3.0, a model that Google released in November 2025 and has since iterated to Gemini 3.1 Pro with a one-million-token context window. In the language of competitive positioning, Meta is spending 2026 money to build a 2025 model.
CFO Susan Li warned analysts in January that 2026 expenses would grow “at a significantly faster rate” than revenue, driven primarily by infrastructure and personnel costs. Arete Research downgraded Meta to Neutral on March 5, cutting its price target from $732 to $676 and flagging the widening gap between Meta’s spending trajectory and its AI revenue realization. META stock slipped 1.4% in pre-market trading on Friday after the Avocado delay became public. The market’s message is straightforward: Wall Street blessed the $135 billion plan because it assumed a frontier model would justify the spending. Without that model, the blessing erodes.
From Llama’s open prairie to Avocado’s walled garden
The strategic irony at the center of Meta’s predicament is that the company built its AI credibility on the exact opposite of what it is now attempting. The Llama series — Llama 2 in 2023, Llama 3 in 2024, and Llama 4 in early 2025 — was the most successful open-source AI play in history. By releasing model weights freely, Meta created a developer ecosystem that no proprietary competitor could match on distribution alone. Llama models were downloaded hundreds of millions of times, integrated into thousands of commercial products, and adopted by governments, universities, and startups that could not afford to license from OpenAI or Google. The open-source strategy served a clear business purpose: it commoditized the model layer, which meant that competitive advantage would accrue to the company with the best distribution and the most user data. Meta had both.
Then Llama 4 disappointed. Released in early 2025, it failed to captivate the developer community the way its predecessors had, and benchmark comparisons showed it trailing GPT-5 and Claude Opus 4 on the tasks that mattered most to production developers: code generation, multi-step reasoning, and instruction following. The disappointment triggered what multiple sources describe as a major internal shake-up. Zuckerberg concluded that open-source development had become a competitive liability — that releasing weights allowed rivals to study Meta’s architectural choices, leapfrog them with proprietary improvements, and then lock in enterprise customers behind API walls while Meta gave away its innovations for free.
The result was a strategic reversal. Avocado was designed from the start as a closed-source, proprietary model — the first in Meta’s AI history to abandon the open-weight tradition entirely. The shift carries enormous implications. Open-source Llama gave Meta soft power: developer goodwill, academic citations, regulatory credibility as a “responsible” AI company that believed in transparency. Closed-source Avocado trades all of that for the promise of higher margins and tighter competitive moats. Proprietary models allow Meta to control distribution, charge for premium access, and integrate the technology exclusively into its own product surfaces. The economic logic is sound if the model is good enough. If it is not — if Avocado launches in May as a mid-tier system in a market that already has four or five frontier-class alternatives — then Meta will have surrendered its open-source credibility for nothing.
The Gemini licensing discussions crystallize the dilemma. Multiple outlets have reported that Meta’s leadership explored the possibility of licensing Google’s Gemini technology on a temporary basis to power certain consumer AI features while Avocado is brought up to competitive performance. No final decision has been confirmed, but the fact that the conversation happened at all inverts Meta’s entire narrative. The company that spent $14.3 billion to acquire Scale AI’s talent and $135 billion to build its own infrastructure is now evaluating whether to rent someone else’s model. In the AI arms race, that is the equivalent of a defense contractor outsourcing its weapons to a rival nation’s factory.
The five ways this bet could blow up
The bull case for Meta’s AI strategy rests on the assumption that Avocado’s delay is a timing problem, not a capability problem. Models improve. Engineers iterate. The $135 billion in infrastructure will eventually produce something competitive, and Meta’s distribution across 3.3 billion daily active users will do the rest. The bear case is more complicated, and it has at least five distinct failure modes that investors and competitors should be tracking.
The first is execution risk at the Superintelligence Labs. Reports have surfaced about internal tensions surrounding Alexandr Wang’s role and influence, though Meta’s official spokesperson dismissed them as “silly.” Regardless of the corporate messaging, building a frontier lab from scratch is one of the hardest organizational challenges in technology. OpenAI took seven years to reach its current capability level. Google DeepMind has been at it for over a decade. Meta is attempting to compress that timeline into eighteen months, with a team assembled through acquisition rather than organic cultivation. The $14.3 billion price tag for Scale AI bought data infrastructure and a brilliant founder, but it did not buy the institutional muscle memory that comes from years of model training at frontier scale.
The second failure mode is margin compression. Meta’s advertising business generates operating margins above 30%, but the AI infrastructure build-out is suppressing free cash flow at a pace that has caught the attention of even bullish analysts. The Motley Fool warned that Meta’s $135 billion capex plan could “backfire on shareholders” if AI revenue does not materialize quickly enough to offset the infrastructure drag. Reality Labs continues to lose money — roughly $16 billion in 2025 — and the combination of metaverse losses and AI infrastructure spending creates a capital allocation problem that even $200 billion in annual revenue cannot easily absorb. The math: if Meta’s AI products do not generate meaningful incremental revenue by mid-2027, the cumulative infrastructure spend will have consumed more than $300 billion in three years with limited visible return.
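The cumulative-spend claim in that paragraph can be checked with back-of-envelope arithmetic. A minimal sketch, assuming 2027 spending stays flat at the 2026 guidance midpoint (an illustrative assumption, not a Meta forecast):

```python
# Back-of-envelope check of the ">$300 billion in three years" claim.
# 2025 is the reported actual; 2026 is the guidance midpoint; 2027 is an
# assumption (flat at the 2026 midpoint), not a company projection.
capex_2025 = 72.2                 # $B, reported
capex_2026 = (115 + 135) / 2      # $B, midpoint of guidance
capex_2027 = capex_2026           # $B, assumed flat (hypothetical)

cumulative = capex_2025 + capex_2026 + capex_2027
print(f"Cumulative 2025-2027 capex: ${cumulative:.1f}B")  # $322.2B, above $300B
```

Even if 2027 spending came in well below the 2026 midpoint, the 2025 and 2026 figures alone put the two-year total near $200 billion, so the three-year $300 billion threshold requires only modest continued investment.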
The third risk is competitive acceleration. While Meta delays Avocado, its rivals are not standing still. Google released Gemini 3.1 Pro with a one-million-token context window and a 77.1% score on ARC-AGI-2. OpenAI’s GPT-5.4 “Thinking” model scored 83% on the GDPVal benchmark, which evaluates AI performance on economically valuable tasks. Anthropic’s Claude Opus 4.6 and Sonnet 4.6 have been deployed across enterprise environments with one-million-token context windows in beta. Every month that Avocado remains in the lab, the frontier moves further away. A model that ships in May merely matching March’s state of the art will already be trailing on day one.
The fourth concern is the talent market. The AI industry is experiencing an unprecedented war for researchers, and Meta’s troubles may make it harder to recruit. If the internal narrative is that the Superintelligence Labs cannot ship a competitive model, top candidates will gravitate toward labs with proven track records. The dynamic is self-reinforcing: talent gaps produce weaker models, weaker models produce talent gaps.
The fifth and most existential risk is strategic confusion. Meta is simultaneously trying to be an open-source champion (Llama still powers much of the ecosystem), a closed-source frontier lab (Avocado), a consumer AI platform (Meta AI across its apps), an enterprise AI provider, and a metaverse company. The cognitive load of maintaining five distinct strategic identities with a single infrastructure budget creates execution risk that compounds with every quarter. Companies that try to be everything often end up being nothing particularly well. The history of technology is littered with conglomerates that believed scale alone could substitute for focus — and the AI era punishes diffusion of effort faster than any previous computing cycle because the frontier moves on a monthly cadence.
Where the chips fall and what operators should do next
The Avocado delay does not mean Meta’s AI strategy is dead. It means the strategy is in its most vulnerable phase — the gap between promise and delivery where narratives are fragile and capital allocation decisions harden into permanent competitive positions. Here is where the trajectory most likely leads, and what operators, investors, and builders should be doing right now.
Meta will almost certainly ship Avocado in May or June, and it will be a respectable model — probably competitive with Gemini 3.0 and GPT-5.3, if not quite at the level of the latest frontier releases. The company has too much capital, too much talent, and too much institutional pride to ship something embarrassing. The question is not whether Avocado will work; it is whether Avocado will justify the narrative of Meta as a frontier AI company capable of competing with purpose-built labs. A model that debuts as the fourth or fifth best in the market does not support a $135 billion capital program. It supports an incremental product improvement.
The Gemini licensing question will become a recurring theme in AI over the next twelve months. As model capabilities converge at the frontier, the cost of maintaining a bespoke model training pipeline becomes harder to justify for companies whose primary business is not AI research. Meta makes money from advertising. Apple makes money from hardware. Amazon makes money from e-commerce and cloud. All three are spending tens of billions on AI infrastructure, and all three will face the same question Meta is confronting now: at what point does it make more sense to license the best available model and focus your engineering talent on the application layer? The answer will reshape the AI industry’s structure. If frontier model training consolidates to three or four labs — OpenAI, Google DeepMind, Anthropic, and perhaps one Chinese competitor — then the rest of the industry becomes an applications and integration layer built on top of those foundations.
For Meta specifically, the next ninety days will determine whether the company’s AI strategy retains credibility or becomes a cautionary tale about capital misallocation. If Avocado ships in May with competitive benchmarks and strong integration into Meta’s consumer products, the narrative resets. If it slips again, or if it launches to middling reviews, the pressure on Zuckerberg will intensify from shareholders who are watching $135 billion in capex with increasingly narrow patience.
Here is what operators across the AI ecosystem should be doing right now:
- Enterprise buyers evaluating Meta AI products should maintain optionality. Do not sign long-term commitments to Meta’s AI stack until Avocado ships and independent benchmarks confirm its competitive positioning. Use this window to evaluate Google’s Gemini for Government, Anthropic’s newly launched Claude Partner Network, and OpenAI’s enterprise offerings.
- Developers building on Llama should assume that future Meta models may not be open-weight. Begin testing your applications against at least one proprietary alternative so that a licensing change does not strand your product.
- Investors in META stock should watch three metrics over the next two quarters: Avocado benchmark results versus Gemini 3.1 Pro and GPT-5.4, the pace of AI-driven revenue attribution in Meta’s earnings calls, and any further analyst downgrades tied to capex concerns.
- Competing AI labs should view Meta’s delay as a window, not a victory. The infrastructure Meta is building will eventually produce competitive models — $135 billion buys a lot of iteration cycles. The strategic advantage goes to labs that use this window to lock in enterprise customers and build switching costs before Avocado arrives.
- Regulators and policymakers should note the irony of Meta’s pivot. The company that argued most loudly for open-source AI as a public good is now building a closed, proprietary system. That shift weakens the case for laissez-faire AI regulation premised on the availability of open alternatives.
The broader lesson from Meta’s Avocado stumble is that capital alone does not buy frontier AI capability. OpenAI has spent a fraction of Meta’s budget and leads on most benchmarks. Anthropic has raised roughly $15 billion total and competes at the frontier with models that enterprise customers are adopting at record pace. Google DeepMind has decades of institutional knowledge that no acquisition can replicate. Meta has the money, the distribution, and the data. What it does not yet have is the model. And in the AI industry of 2026, the model is still what matters most.
In other news
Elon Musk admits xAI “was not built right,” begins rebuilding from scratch — Less than six weeks after merging xAI with SpaceX in a deal valued at $1.25 trillion, Musk acknowledged the AI startup must be “rebuilt from the foundations up” after losing 10 of 12 original co-founders. The company has poached two senior engineers from Cursor and carried out a round of layoffs targeting underperforming coding products.
Anthropic commits $100 million to the Claude Partner Network — Anthropic launched a formal enterprise channel program backed by $100 million in partner investment, enlisting Accenture, Deloitte, Cognizant, and Infosys. Accenture alone is training 30,000 professionals on Claude, and Deloitte has opened Claude access to its global workforce of 350,000.
Google Maps gets its biggest AI upgrade in a decade — Google rolled out “Ask Maps,” a Gemini-powered conversational feature that answers complex location queries using data from more than 300 million places and 500 million community contributors, alongside redesigned 3D Immersive Navigation with real-time road details including lanes, crosswalks, and traffic lights.
Morgan Stanley warns a “shocking” AI breakthrough is imminent — A new Morgan Stanley research note predicts a transformative leap in AI capabilities during the first half of 2026, citing OpenAI’s GPT-5.4 “Thinking” model scoring 83% on the GDPVal benchmark — a test designed to measure AI performance on economically valuable tasks — and warning that the pace of progress will strain power grids and disrupt labor markets faster than most institutions can adapt.
OpenAI plans to bring Sora video generation into ChatGPT — Facing slowing installs for the standalone Sora app, OpenAI is reportedly preparing to integrate its AI video generator directly into ChatGPT, exposing the tool to the platform’s 900 million weekly active users. The company has also surpassed $25 billion in annualized revenue as of February.