Apple Paid Google $1 Billion for Siri. It Still Can't Ship.
The billion-dollar voice that keeps losing its words
On March 30, Apple released the first developer beta of iOS 26.5. It contained no Gemini-powered Siri features. None. Not the on-screen awareness that was supposed to let Siri see what you are looking at. Not the multi-step task chaining that would let a single voice command book a restaurant, text your friend the address, and add it to your calendar. Not the conversational memory that would finally let Siri remember what you asked it thirty seconds ago. The most anticipated AI upgrade in iPhone history — the product of a multi-year, billion-dollar partnership with Google announced with considerable fanfare on January 12 — has now been pushed to iOS 27 in September, according to MacRumors. That is eight months after the deal was announced. It is fifteen months after Apple first promised a “more personalized” Siri at WWDC 2025. It is, by any measure, a delay that demands scrutiny.
The timeline is worth reconstructing because it reveals a pattern. Apple initially partnered with OpenAI in June 2024 to bring ChatGPT integration to Apple Intelligence, but the scope of that collaboration was narrow — a chatbot living inside a sidebar, not a reimagining of the core assistant. By autumn 2025, Apple was in discussions with Google about something far more ambitious: rebuilding Apple’s foundation models on top of Google’s Gemini architecture and cloud infrastructure. When the deal was announced in January 2026, 9to5Mac reported that the first Gemini-powered features would ship in iOS 26.4, tentatively scheduled for March. Then in February, Apple pushed back the timeline, citing quality issues. Internal testing revealed that the new Siri cuts users off mid-sentence, struggles with complex multi-step requests, processes queries incorrectly, and exhibits unacceptable response latencies. Now iOS 26.5 has arrived without the features either, and the focus has shifted entirely to iOS 27.
The financial context makes the delay more striking. Bloomberg reported that Apple is paying Google approximately $1 billion per year for access to Gemini models, with some analysts estimating the total deal value could reach $5 billion over its multi-year term. That money is flowing to Google right now, regardless of whether Apple ships a single Gemini-powered feature to a single user. Apple is paying for capacity it cannot yet deploy, training data it cannot yet leverage, and a competitive advantage it cannot yet demonstrate to the 1.52 billion iPhone users who are still stuck with a Siri that cannot hold a three-turn conversation. Meanwhile, the company plans just $14 billion in total capital expenditures for 2026, a figure that looks almost quaint next to the $650 billion that Amazon, Google, Microsoft, and Meta are projected to spend on AI infrastructure this year alone. The gap between Apple’s spending and everyone else’s is not just large. It is structural. And the Siri delays suggest that outsourcing the hard parts of AI to a partner does not eliminate execution risk — it may, in fact, multiply it.
Apple’s stock has reflected the anxiety. Shares are down roughly 7 percent year-to-date, and on February 12 the company lost $202 billion in market capitalization in a single session after reports of the Siri delays coincided with FTC scrutiny. At a market capitalization of approximately $3.7 trillion, Apple remains the world’s most valuable company. But its AI narrative — the story investors need to believe to justify that valuation — is fraying at the edges, one missed deadline at a time.
The partnership paradox: why Apple needs Google and why Google needs Apple more
The Apple-Google Gemini deal is one of those arrangements where both sides need each other and neither side entirely trusts the other. For Apple, the calculus is straightforward: it lacks a competitive large language model and needs one before September. Apple Intelligence, the suite of on-device AI features launched with iOS 26, runs on Apple’s own foundation models for simple tasks like summarization and notification triage. But for the complex reasoning, conversational fluency, and multimodal understanding that users now expect from an AI assistant — the capabilities that ChatGPT and Claude deliver daily — Apple’s in-house models are not competitive. The Fortune analysis of the deal was blunt: Apple’s continuing need to rely on external partners “is a worrisome sign, suggesting that Apple, a champion of vertical integration, is still struggling to build its own LLM.”
For Google, the deal is a distribution coup of historic proportions. Gemini gains access to Apple’s 2.35 billion active devices and 1.52 billion iPhone users — a reach that no amount of Google advertising could replicate. The CNBC report on the partnership noted that Google receives complete integration into Apple’s private cloud compute infrastructure, while Apple gets access to Gemini models that it can distill into smaller variants capable of running directly on-device. The financial terms — at least $1 billion annually — represent high-margin revenue for Alphabet at a moment when investors are demanding proof that massive AI spending translates into monetizable products. During Alphabet’s Q4 earnings, analysts pressed management specifically about the Apple deal’s revenue impact, signaling that Wall Street views it as a material catalyst.
The arrangement also reveals something uncomfortable about OpenAI’s strategic positioning. According to the Financial Times via 9to5Mac, OpenAI made a deliberate decision not to become Apple’s custom model provider, choosing instead to focus on building its own consumer hardware to “leapfrog the big tech companies.” That gamble looks increasingly risky. OpenAI walked away from guaranteed distribution to 1.5 billion iPhones in favor of an unproven hardware bet, while Google walked in and collected both the revenue and the distribution. Sam Altman may believe that owning the device is the endgame, but in the meantime, Google’s Gemini is being woven into the operating system that runs on more than half of all smartphones in the United States. Distribution, as Bill Gates once observed, is not everything — but without it, everything else is academic.
The deeper question is whether Apple’s partnership model can survive the pace of AI development. The company has historically excelled at vertical integration — designing its own chips, its own operating systems, its own services ecosystem. The decision to outsource the intelligence layer of its most personal product to a competitor represents a departure from that playbook that Apple’s leadership clearly finds uncomfortable. Tim Cook’s January comments emphasized that Apple retains the ability to train smaller models on Gemini-derived architectures, and Apple Intelligence will continue to run through Apple’s privacy infrastructure. But the core capability — the reasoning engine that will determine whether Siri can actually compete with ChatGPT, Claude, and Google’s own Gemini chatbot — belongs to Google. That is a dependency that Apple has never tolerated in any other part of its product stack, and the Siri delays suggest it is already creating friction.
Samsung’s parallel strategy provides an instructive contrast. At CES 2026, Samsung co-CEO TM Roh announced plans to double the number of Gemini-powered mobile devices to 800 million by year-end, extending AI features from flagships to mid-tier and budget devices. Samsung is not building its own LLM either, but it is moving faster on integration — shipping Gemini features across its Galaxy lineup while Apple struggles to ship them at all. Samsung’s surveys show that awareness of its Galaxy AI brand has jumped to 80 percent from roughly 30 percent in a single year, a brand velocity that Apple cannot match while its AI features remain in perpetual beta.
Three ways Apple’s AI bet could collapse under its own weight
The optimistic case for Apple is that September’s iOS 27 will deliver a transformative Siri that makes the delays irrelevant. Bloomberg has reported that Apple is testing a standalone Siri app alongside a new “Ask Siri” feature that will give the assistant a chatbot-like interface, and that Apple plans to open Siri to rival AI assistants through an Extensions system that lets users download alternative chatbots from the App Store. If Apple executes on this vision, it could transform Siri from a single-vendor assistant into an AI platform — a move that would leverage Apple’s distribution advantage while hedging against the risk of betting on any one model provider. That is a genuinely compelling strategy, if it ships.
But there are at least three failure modes that deserve serious consideration. The first is the quality trap. Apple has delayed Gemini-powered Siri twice now because the product does not meet its standards. That perfectionism has historically served Apple well — no one remembers the original iPhone shipping without copy-paste because the phone was so transformative that users forgave the omissions. But in AI, the competitive landscape moves quarterly, not annually. Every month that Siri remains a command-and-response utility while ChatGPT, Claude, and Gemini (through Google’s own apps) deliver fluid conversational AI is a month that erodes Apple’s credibility as an AI platform. An MIT study from August 2025 found that 95 percent of organizations see zero measurable return from generative AI, which suggests that in the enterprise, execution quality matters less than most people assume. But consumer AI is different. Users compare Siri to ChatGPT every day, and the comparison is devastating.
The second failure mode is the dependency spiral. Apple is now reliant on Google for its most important AI capability. If Google decides to prioritize its own Gemini chatbot experience on Android over the Apple integration — and there is every incentive to do so — Apple has limited leverage. The Fortune analysis noted that Google’s real prize is not the $1 billion annual payment but the access to Apple’s distribution channel. If Google’s strategic calculus shifts — if, for example, Gemini becomes so dominant that Google no longer needs Apple’s distribution — the terms of the relationship change. Apple, having invested years integrating Gemini into its infrastructure, would face a switching cost measured in engineering years, not dollars. The company that built its empire on controlling every layer of the stack has voluntarily inserted a layer it does not control.
The third failure mode is market timing. The AI landscape in September 2026 will look nothing like it does today. OpenAI’s next-generation model, codenamed Spud, has completed pretraining and is expected to launch in Q2, potentially before Apple’s WWDC in June. Anthropic’s Claude continues to gain enterprise share. Google itself will have shipped multiple Gemini model updates by September, potentially making the version Apple integrates feel dated on arrival. The AI model frontier moves on a cadence measured in weeks, and Apple’s product cycle operates on a cadence measured in months to years. That temporal mismatch is a structural disadvantage that no partnership can fully resolve. Google itself shipped Gemini 3.1 Flash-Lite in early March, a model delivering 2.5x faster responses at one-eighth the cost of Pro — the kind of rapid iteration that keeps developers loyal to a platform. By the time Apple ships its reimagined Siri, users may have already built their AI workflows around ChatGPT, Claude, or Gemini accessed through Google’s own apps — making Siri not just late, but irrelevant to the daily habits that determine which AI assistant wins the consumer market.
The stock market has already partially priced in these risks. Apple’s worst single-day loss of $202 billion was not triggered by a revenue miss or a product failure. It was triggered by the perception that Apple cannot ship AI on time. In a market where the Magnificent Seven’s collective valuation hinges on AI execution, perception and reality converge faster than Apple’s annual product cycle can accommodate. Wedbush called the selloff “unwarranted” and predicted that 2026 would be Apple’s pivotal AI year. But pivotal years require pivots, and Apple has not yet demonstrated that it can pivot at AI speed.
The September equation and the operator’s playbook
The convergence of forces bearing down on Apple in September 2026 is unlike anything the company has faced since the original iPhone launch. iOS 27 is expected to debut at WWDC on June 8 with the reimagined Siri, the standalone Siri app, and the AI Extensions marketplace. The iPhone 17 arrives in the fall alongside a foldable iPhone that Morgan Stanley has called the most catalyst-dense hardware setup in years. If Apple delivers on all of these — if Siri finally works, if the foldable hardware is compelling, if the AI Extensions create a vibrant third-party ecosystem — then the bearish thesis collapses and Apple’s partnership strategy is vindicated as the most capital-efficient AI bet in Big Tech. The company will have spent $14 billion to deliver what its competitors spent $650 billion chasing.
That is a big if. But the contrarian case deserves examination because Apple’s history of late-but-dominant market entries is longer than most investors’ memories. Apple was not first to smartphones, not first to tablets, not first to smartwatches, and not first to wireless earbuds. In each case, it entered late with a polished product that captured the majority of profit share within two years. The AI playbook could follow the same pattern: let competitors spend hundreds of billions building the infrastructure, then leverage partnerships and superior product design to deliver the consumer experience that actually works. The 24/7 Wall Street analysis argued that Apple’s “lazy AI strategy” could crush the competition precisely because it avoids the cost structure that is destroying margins at every other Big Tech company.
The problem with the historical analogy is that AI is not a hardware category. When Apple entered the smartphone market, it was competing against Blackberry and Nokia — companies that controlled their own supply chains but lacked Apple’s design sensibility. When Apple enters the AI assistant market in earnest, it will be competing against OpenAI, Anthropic, and Google — companies that control the most powerful reasoning engines ever built and are iterating on them every few weeks. Apple’s design advantage is real, but it is a surface-layer advantage. The value in AI accrues to the model, not the interface. A beautifully designed Siri powered by a model that is two generations behind the frontier is still just a beautiful way to get mediocre answers. And in a world where ChatGPT has surpassed 900 million weekly active users, mediocre answers are not a launch strategy — they are a concession speech.
For operators — product managers, CTOs, enterprise buyers — the Apple situation crystallizes a question that every organization now faces: do you build, buy, or partner your way to AI capability? Apple, the world’s most vertically integrated consumer technology company, chose to partner. If Apple cannot build a competitive LLM with $14 billion in capex and the most talented engineering organization on Earth, what does that say about every company that is smaller, less well-resourced, and less technically sophisticated? The lesson is not that building is impossible. It is that the bar for competitive AI has risen so rapidly that even trillion-dollar companies are choosing to outsource their intelligence layer rather than fall further behind.

Whether that choice proves wise depends entirely on what ships in September. Until then, Apple’s AI strategy remains a $1 billion bet on a partner’s technology and a $3.7 trillion market cap built on a promise that has been deferred, again, to next quarter. The world’s most valuable company has told two billion device owners that intelligence is coming. The question is no longer whether Apple can deliver AI. It is whether Apple can deliver AI before its users stop waiting and find it somewhere else.
- Build for optionality, not commitment. If you are integrating AI into your product, design for model-switching from day one. Apple’s dependence on a single model provider is a cautionary tale. Use abstraction layers and the emerging MCP standard to ensure you can swap providers without rewriting your stack.
- Ship incrementally, not annually. Apple’s annual release cycle is a liability in AI. If your organization ships AI features, adopt a continuous deployment model that can incorporate model improvements in days, not months. The companies winning in AI right now are the ones updating their models every two to four weeks.
- Measure adoption, not announcements. Samsung’s 80 percent Galaxy AI awareness figure matters more than Apple’s WWDC keynote. Track whether users are actually engaging with your AI features, not whether you announced them at a conference. The gap between announcement and adoption is where most AI strategies die.
- Watch September. If Apple delivers a transformative Siri, the ripple effects hit every AI-adjacent company. App developers, enterprise SaaS providers, and consumer hardware makers will need to adapt to a world where 2.35 billion devices have a competent AI assistant. If Apple misses again, the window for third-party AI assistants on iOS widens dramatically.
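The first item above — design for model-switching from day one — can be sketched in a few lines. Everything below is a hypothetical illustration, not any vendor’s real SDK: the `ChatModel` interface, the stub backends, and the `Assistant` class are invented names. The point is that application code depends only on one interface, so swapping providers is a one-object change rather than a rewrite.

```python
from abc import ABC, abstractmethod


class ChatModel(ABC):
    """Provider-agnostic interface; real backends would wrap a vendor SDK."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class GeminiBackend(ChatModel):
    """Hypothetical stub standing in for a Gemini client."""

    def complete(self, prompt: str) -> str:
        return f"[gemini] answer to: {prompt}"


class ClaudeBackend(ChatModel):
    """Hypothetical stub standing in for a Claude client."""

    def complete(self, prompt: str) -> str:
        return f"[claude] answer to: {prompt}"


class Assistant:
    """Application code talks to ChatModel, never to a concrete provider."""

    def __init__(self, backend: ChatModel) -> None:
        self.backend = backend

    def swap_backend(self, backend: ChatModel) -> None:
        # Switching providers touches this one object, not the whole stack.
        self.backend = backend

    def ask(self, prompt: str) -> str:
        return self.backend.complete(prompt)


assistant = Assistant(GeminiBackend())
print(assistant.ask("Summarize my unread messages"))  # served by the Gemini stub
assistant.swap_backend(ClaudeBackend())
print(assistant.ask("Summarize my unread messages"))  # same call, new provider
```

An MCP-style tool layer would sit behind the same seam: because callers never see the concrete backend, the abstraction survives a change of provider, pricing, or model generation.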
In other news
Salesforce gives Slackbot an AI brain transplant — Salesforce unveiled over 30 new AI capabilities for Slackbot, transforming the workplace messaging tool into what the company calls an “agentic operating system.” The upgraded bot can transcribe meetings, conduct multi-step research sessions, and now operates as an MCP client capable of interacting with 2,600 Slack Marketplace apps. Starting this summer, every new Salesforce customer receives Slack automatically provisioned and AI-powered (TechCrunch).
Nvidia takes a $2 billion stake in Marvell — Nvidia invested $2 billion in Marvell Technology to co-develop “NVLink Fusion,” a platform that integrates Marvell’s custom AI accelerators into Nvidia’s proprietary interconnect fabric. The deal also covers silicon photonics and 5G/6G AI infrastructure, signaling Nvidia’s shift from competing against custom silicon to co-opting it. Marvell shares surged 11 percent on the news (Bloomberg).
OpenAI’s Spud finishes pretraining — OpenAI confirmed that its next frontier model, internally codenamed Spud and likely to be branded GPT-5.5 or GPT-6, completed pretraining around March 24. CEO Sam Altman described it internally as “a very strong model” that could “really accelerate the economy.” Prediction markets point to a Q2 2026 public release, potentially before WWDC (Geeky Gadgets).
Uber and WeRide go fully driverless in Dubai — Uber and WeRide launched fully driverless robotaxi service in Dubai, operating fare-charging rides without a human safety operator in the Jumeirah and Umm Suqeim tourist districts. Uber disclosed a 5.82 percent equity stake in WeRide as part of a commercial partnership targeting 15 additional cities over five years (TechCrunch).
Google ships Gemini 3.1 Flash-Lite for penny-pinching developers — Google released Gemini 3.1 Flash-Lite, a model delivering 2.5x faster time-to-first-token and 45 percent faster output than its predecessor, at $0.25 per million input tokens — roughly one-eighth the cost of Gemini 3.1 Pro. The efficiency play aims to capture high-volume developer workloads where margins matter more than benchmarks (VentureBeat).