Anthropic rents Musk's Colossus, then aims for orbit
The day Anthropic ran out of dirt
Wednesday’s deal makes one thing impossible to ignore: the limiting factor on frontier AI in 2026 is no longer talent, ideas, or even silicon. It is acreage. Anthropic’s partnership announcement with SpaceX, published May 6, hands the Claude maker the entirety of the Colossus 1 supercomputer in Memphis — a single facility that already holds more than 220,000 Nvidia GPUs and was, until last week, the heart of Elon Musk’s xAI. The capacity transfer adds 300+ megawatts of operational compute to Anthropic’s stack inside thirty days, enough power, as Al Jazeera notes, to light roughly 300,000 American homes. Tucked into the same release is a stranger sentence: both companies have agreed to explore “multiple gigawatts” of orbital data-center capacity. Anthropic has not just rented a building. It has rented a runway to space.
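The homes comparison holds up as a back-of-envelope exercise. A quick sketch, where the household-consumption figure is an assumed EIA-style ballpark rather than anything from the announcement:

```python
# Sanity-check the "300 MW lights ~300,000 homes" line.
facility_mw = 300
homes_quoted = 300_000

# The quote implies 1 kW of continuous draw per home...
implied_kw_per_home = facility_mw * 1_000 / homes_quoted  # 1.0 kW

# ...versus an assumed EIA-style ballpark of ~10,500 kWh/year per US household.
eia_avg_kw = 10_500 / 8_760  # ~1.2 kW of average continuous draw

print(f"implied {implied_kw_per_home:.1f} kW/home vs ~{eia_avg_kw:.1f} kW/home average")
```

The quote implies one kilowatt per home, a touch below the national average draw, so "roughly 300,000 homes" rounds slightly high but sits squarely in the right ballpark.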
Read the deal as a single signal and it is almost too tidy. Anthropic, the lab that has spent eighteen months claiming a grown-up’s lead in safety culture, is now running its frontier models on hardware bankrolled by the man currently suing OpenAI for betraying its nonprofit charter. The story sells itself. But the operational read is louder than the soap opera. Anthropic’s Q1 numbers — including the 80x year-over-year usage growth and a revenue run-rate that briefly eclipsed OpenAI’s for the first time — are colliding with a physical constraint that no amount of post-training cleverness can wave away. Claude Code’s rate limits had been throttling paid tiers. The deal removes that bottleneck overnight, with Pro and Max users seeing limits double and peak-hour caps lifted entirely.
The political subtext lands harder than the technical one. Musk explained his choice with characteristic understatement, telling reporters “no one set off my evil detector,” a line Al Jazeera printed straight and the rest of the press corps quoted with delight. Tom Brown, Anthropic’s chief compute officer and co-founder, stuck to engineering: “We are going to need to move a lot of atoms in order to keep up with AI demand.” Both quotes are right. The Colossus 1 deal is what happens when a fast-growing AI lab discovers it cannot move atoms fast enough using the conventional Microsoft–Google–Amazon triad, and a billionaire with a dormant 220,000-GPU asset discovers his rival’s enemy is, financially speaking, his friend.
The structural shift is what makes this more than gossip. The hyperscalers have anchored frontier-AI compute since GPT-3, but their queues are full. OpenAI booked roughly half of Microsoft’s, Oracle’s, Google’s, and Amazon’s combined cloud backlog in Q1, leaving everyone else fighting over scraps. Anthropic had already secured 3.5 gigawatts of future capacity from Broadcom and Google, banked $8 billion from Amazon, and added Nvidia hardware partnerships. None of that solved the problem of now. Colossus 1 does. The lesson the rest of the field is taking — Mistral, Meta, every Chinese lab, every sovereign-AI consortium — is that bilateral, non-hyperscaler compute deals are no longer a fallback. They are the main game. I called this the hyperscaler shake-up in April; this deal accelerates it.
The orbital footnote is the one nobody can quite parse. SpaceX has openly tied its forthcoming IPO to a thesis that compute belongs in space, where solar capacity factors run above 95% versus the 25–30% Earth realises and where the panel itself, untouched by atmosphere, generates roughly 40% more peak power. Anthropic’s “interest” in collaborating on those orbital facilities is, on its face, a polite memorandum. But Anthropic is also reportedly weeks away from filing for an IPO of its own, with bankers floating $60 billion as the round size and trillion-dollar valuations as the framing. Two pre-IPO companies, one of them already bleeding capacity, the other promising to launch the next generation of capacity, are now joined at the meter. That is not a footnote. That is the entire thesis.
220,000 GPUs and a 30-day handshake
Strip the headline back to its mechanics and the deal becomes legible. Colossus 1 is the South Memphis facility xAI raced into existence in 2024, then expanded through 2025. Per SemiAnalysis’s reporting, the campus today hosts roughly 240 megawatts of operational AI compute on the original site, with another 900 megawatts being built out next door under a Solaris–xAI joint venture targeting Colossus 2 as the world’s first gigawatt single-site AI cluster. The 220,000 GPUs cited in the Anthropic announcement are the H100, H200, and GB200 mix that fills the existing halls. The new tenant’s job is not to wait for Colossus 2 to finish. The new tenant’s job is to load Claude weights onto warm metal that already exists, and to do it before quarterly capacity limits choke the company’s enterprise pipeline. That is why the press release emphasises that the additional capacity arrives “within one month,” a phrasing CNBC quoted verbatim.
The financial scale of the deal is not disclosed, which is itself revealing. Anthropic closed its $30 billion Series G three months ago at a $380 billion post-money valuation, a round that immediately attracted secondary offers near $800 billion. The company’s annualised revenue has crossed $30 billion as of early April, growing roughly 1,400% year-over-year. Eight of the Fortune 10 are now Claude customers; more than 1,000 enterprises spend at least $1 million annually. Against that revenue base, an undisclosed multi-year compute lease that lifts immediate constraints on the Pro, Max, Team, and Enterprise tiers is the kind of expenditure boards approve without rounding. SpaceX, for its part, gets the second-largest dedicated-AI tenant in the world to anchor its Memphis campus and a marketing line for the IPO road show. The math is simple enough that the decision was almost certainly closed in days, not weeks.
The downstream product impact is concrete. Claude Code has been Anthropic’s fastest-growing surface for a year, the part of the product where heavy developers actually decide whether to renew. Anthropic’s own announcement now doubles Claude Code’s five-hour rate limits for paid tiers, removes peak-hour reductions for Pro and Max, and “considerably” raises API rate limits on Opus 4.7. Translating that to dollars: a senior engineer paying $200 a month for Max who was hitting throttling on a heavy refactor session now stops hitting it. That changes renewal math. The same lifted limits apply to Team and Enterprise plans, where rate-limit incidents had become an early-warning indicator of churn risk. I noted in my Opus 4.7 review last month that the model had reclaimed the coding crown but was capacity-bound. The capacity argument is now, at least temporarily, retired.
The competitive surface this opens is harder for OpenAI to mirror. OpenAI has its own compute alliances — anchor Microsoft, $50 billion projected 2026 spend, the Cerebras side-deal worth more than $10 billion per the IPO prospectus, and most recently the $122 billion mega-round at an $852 billion valuation. But OpenAI cannot rent xAI’s Colossus, because xAI is OpenAI’s direct rival in the Musk lawsuit. The political optics are blocked. Anthropic, which has stayed deliberately off Musk’s enemy list, can. Result: Anthropic acquires a strategic compute lane its largest competitor cannot replicate, at exactly the moment its largest competitor is busy diluting itself for headline-friendly capital. That is a structural advantage worth more than any single training run.
The orbital piece is what most reporters undersold. SpaceX’s case for space-based data centers is not the science-fiction dramatisation; it is a multi-page engineering argument that has been quietly under construction since Google Research published its feasibility study in late 2025. The Google paper concluded that orbital compute becomes economically competitive with terrestrial data centers when launch costs fall to roughly $200/kg, a number Starship is projected to hit between 2030 and 2035 if it scales to 180 launches per year. Starcloud, the Seattle startup, is already at a $1.1 billion valuation for its bet on the same thesis. NVIDIA in November announced a space-computing initiative anchored by Jetson and Blackwell silicon designed for radiation-hardened deployment. None of this is hypothetical. It is just early. Anthropic’s “expression of interest” buys an option on the lane before the clearing price re-rates.
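The capacity-factor and peak-power figures compound. A rough sketch of the per-panel energy multiple they imply, where the inputs are the cited claims rather than measured data:

```python
# Energy-per-panel multiple implied by the cited orbital-solar figures.
orbital_cf = 0.95       # claimed orbital capacity factor (above 95%)
terrestrial_cf = 0.275  # midpoint of the cited 25-30% terrestrial range
peak_bonus = 1.40       # ~40% more peak power with no atmospheric losses

energy_multiple = (orbital_cf * peak_bonus) / terrestrial_cf
print(f"~{energy_multiple:.1f}x annual energy per panel in orbit")
```

Roughly a fivefold energy yield per panel is the physics case; the economics still hinge entirely on the $200/kg launch threshold the Google paper identifies.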
Where this trade hits the atmosphere
The thesis has plenty of places it could break. Begin with Memphis itself. Colossus 1’s power story is, charitably, contested. The Southern Environmental Law Center has spent eighteen months challenging xAI for operating up to 35 methane gas turbines on the site, an arrangement Shelby County eventually permitted in trimmed form for 15 turbines and roughly 247 megawatts. The NAACP, the SELC, and Memphis residents continue to argue the plant should not be there at all. Anthropic, whose marketing leans heavily on responsible deployment, is now the demand-side anchor for a methane-fired AI factory that exists because the local grid offered only 8 megawatts of reliable interconnect when xAI broke ground. If a future administration tightens its environmental-justice posture, or if a court orders the turbines shut down, Anthropic’s “within one month” capacity surge collapses. The political risk is not academic. I argued in the AI grid crisis piece that Memphis specifically would be the case study; that conjecture now applies directly to Claude.
The Musk variable is the second crack. Musk’s relationship with the AI industry runs on personality, not contract law. He sued OpenAI; he runs xAI as a competitor to both OpenAI and Anthropic; he has feuded publicly with seemingly every CEO in the field. The “no evil detector” quip is amusing exactly until the day Musk decides Claude has been too critical of Grok, or too friendly to a Tesla competitor, or insufficiently aligned on one of his shifting causes. The lease contract presumably contains termination protections, but contracts are imperfect insulation against a partner who posts policy decisions on X at 3 a.m. The market is pricing this risk at near zero today. That is unlikely to remain the resting state.
The third counterpoint is that the orbital portion is a sci-fi line with terrestrial consequences. Even charitable estimates — IEEE Spectrum’s, Built In’s, and Google’s own internal analysis — place commercially viable orbital data centers at least a decade out, contingent on launch costs that have not yet hit their published threshold. The most credible 2026 deployments are kilowatt-scale tethered satellites, not the gigawatt-class facilities both companies have publicly described. If Anthropic is implicitly underwriting any portion of SpaceX’s IPO road show with a real or perceived commercial commitment to orbital compute, the gap between the marketing and the engineering becomes a stock-price risk for both firms once the IPO is priced. Skeptics on AI infrastructure Twitter were already calling the orbital section a “ribbon-cutting line,” not a contract.
The fourth crack is regulatory. Anthropic was conspicuously excluded from the Defense Department’s recent ten-company AI agreement and from the CAISI pre-deployment testing sweep until the renegotiation announced this past Tuesday. Now that the company is suddenly running national-scale loads on infrastructure owned by a politically polarising figure who himself runs a competing AI lab, the federal government’s interest in the arrangement will not stay academic. The FTC has already opened informal inquiries into AI compute concentration. Antitrust hawks across both parties have pointed at “compute rent” deals between mega-customers and infrastructure owners as a structural concern. Anthropic and SpaceX are, between them, sitting on what may become exhibit A.
A subtler counterpoint is operational. Compute leases this size impose hidden engineering costs. The Colossus 1 stack runs xAI’s tooling, networking topology, and orchestration layer; Anthropic’s Claude training and inference stacks run on Amazon’s Trainium, Google’s TPUs, Microsoft’s Azure abstractions, and Nvidia’s CUDA-aligned configurations. Migrating production workloads onto a new hardware envelope and a new cooling profile in thirty days is, in the polite phrasing, ambitious. Inference reliability tends to suffer during such migrations, which means the same Claude users who were promised double rate limits could see, instead, a noticeable bump in tail latency or error rates over the first six to ten weeks. The press release does not mention this, of course. The on-call rotation will.
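The shape of the problem the on-call rotation inherits is standard canary math. A purely illustrative sketch, not Anthropic's actual tooling, of the kind of tail-latency guardrail that gates traffic shifts onto freshly migrated hardware; all names and thresholds here are hypothetical:

```python
# Illustrative migration guardrail: only shift more traffic onto the new
# fleet while its tail latency stays close to the incumbent fleet's.
from statistics import quantiles

def p99(latencies_ms):
    """99th-percentile latency, in milliseconds, from raw request samples."""
    return quantiles(latencies_ms, n=100)[-1]

def safe_to_shift_traffic(baseline_ms, canary_ms, max_regression=1.25):
    """Allow the shift only while the canary's p99 is within 25% of the
    incumbent fleet's p99 (threshold assumed for illustration)."""
    return p99(canary_ms) <= max_regression * p99(baseline_ms)

# Hypothetical per-request latencies (ms) sampled from each fleet.
baseline = [40, 42, 45, 48, 50, 52, 55, 60, 70, 250]
canary = [41, 44, 46, 47, 51, 53, 56, 62, 75, 260]
print(safe_to_shift_traffic(baseline, canary))  # True
```

The point of gating on p99 rather than the mean is exactly the failure mode described above: migrations tend to hurt the tail first, long before averages move.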
The buyer’s playbook for a compute-rationed year
Where this trade points is unmistakable. Compute is the new sovereignty, and the ownership map is consolidating fast. Anthropic and OpenAI together now control roughly half the world’s near-term AI compute backlog. Meta has guided 2026 capex up to $125–145 billion, nearly double last year. Big Tech’s collective AI infrastructure spend will cross $650 billion this year, the largest single-year corporate investment cycle in history. Within that, the lab–lessor pair is becoming the new center of gravity, displacing the lab–hyperscaler dyad that defined 2023–2025. Anthropic–SpaceX is not the first such pairing — OpenAI–Cerebras, OpenAI–Oracle, and the emerging Mistral–Equinix arrangements are arguably first — but it is the deal that finally puts the pattern on the front page. By December, expect every frontier lab outside the original triad to have its own bilateral compute lessor, and expect investors to value that lessor as a function of which lab anchors it.
The orbital lane is the long-cycle option. The most useful frame is the one Bell Labs operators used to apply to undersea cables: the spend is fifteen years out, the option value is now. Anthropic spending diplomatic capital today on a non-binding letter of cooperation with SpaceX, while Starcloud, Blue Origin, and NVIDIA Space are also sounding the same tone, looks slightly absurd in the present. It will look prescient in a hypothetical 2034 where ground-based gigawatts are politically unpermittable in the United States and where Asian and Middle Eastern markets have priced terrestrial data centers out of their grids. The cost is small. The optionality is large. That is the cleanest case for the deal’s strangest paragraph.
For builders and AI buyers, the immediate consequences are concrete. The compute squeeze that has been throttling Claude usage on every paid plan is, today, materially loosened. Teams that were considering hybrid Claude/GPT setups for capacity reasons rather than capability reasons should re-test the math this week. Pricing will follow capacity, not the other way around; expect rate-limit-driven inference markups to soften across Q3. The political risk premium on AI vendors will rise, not fall. And the next time a frontier lab announces a “safety story” without an attached “compute story,” investors should discount the press release by roughly half.
A short operator’s checklist for the week:
- Re-benchmark your Claude bill. With Pro, Max, Team, and Enterprise rate limits doubled and peak-hour caps lifted, your actual cost-per-token on Claude Code workloads should drop noticeably. Re-run the calculation before locking in any annualized GPT-only contract.
- Audit your single-vendor exposure. If your stack is OpenAI-only because of past Claude rate-limit pain, that justification has weakened. Multi-model failover is no longer a luxury; the Memphis controversy alone makes it prudence.
- Build a “where is your compute hosted?” question into your AI vendor scorecards. Methane-turbine power, sovereignty risk, and antitrust exposure are now first-order procurement variables, not ESG checklist items.
- Treat orbital compute as an option, not a product. Until launch costs hit roughly $200/kg, no production workload belongs in low Earth orbit. But your 2030–2032 capacity planning should include a credible orbital line.
- Watch for the next bilateral. Mistral, DeepSeek, the European AMI Labs consortium, and Cohere are all running the same arithmetic Anthropic ran. Expect at least two more headline compute leases between AI labs and non-hyperscaler infrastructure operators by Labor Day.
- Anthropic’s IPO is now priced into compute. A June filing at a trillion-plus valuation has already been telegraphed by the deal itself. Adjust private-secondary marks, options pricing on related vendors, and your own enterprise term-sheet expectations accordingly.
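The re-benchmark in the first checklist item is simple enough to sketch. Everything below except the published $200/month Max price is an assumed input; swap in your own team's numbers:

```python
# Hypothetical Claude Code seat re-benchmark after the limit doubling.
# Only the $200/month Max price comes from the article; session counts
# and throttle rates are illustrative assumptions.
max_seat_usd = 200
sessions_per_month = 60                  # assumed heavy-use coding sessions

throttled_before = 0.25                  # assume 1 in 4 sessions hit the 5-hour cap
throttled_after = throttled_before / 2   # limits doubled -> assume throttling halves

useful_before = sessions_per_month * (1 - throttled_before)
useful_after = sessions_per_month * (1 - throttled_after)

print(f"cost per useful session: ${max_seat_usd / useful_before:.2f} "
      f"-> ${max_seat_usd / useful_after:.2f}")
```

Under these assumptions the effective cost per unthrottled session drops by about 14%, which is the number to hold against any annualized single-vendor contract.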
The shorter version of all of the above: in 2026, the company that controls the watts controls the model. Anthropic just bought 300 megawatts of insurance against running out of either, paid for it in stock-market goodwill it had to spend on Musk, and quietly hedged the rest of the decade by attaching itself to the only company seriously building toward the next compute frontier. It is an inelegant deal. It is also, given the alternative, exactly the right one.
In other news
Hut 8 lands a $9.8 billion AI data-center lease with one tenant. Hut 8 announced May 6 that it had executed a fifteen-year, 352-megawatt lease for the first phase of its Beacon Point campus in Nueces County, Texas, structured as a triple-net agreement with a $9.8 billion base-term contract value and renewal options that could push total value to $25.1 billion (Data Center Dynamics). The tenant is described only as “high-investment-grade.” The stock rose 37% on the news, doubling Canaccord’s price target.
Perplexity rolls Finance Search into the Agent API. On May 6 Perplexity launched a Finance Search tool inside its Agent API, exposing licensed financial datasets, real-time market quotes, and cited web sources through a single tool call billed at $5 per 1,000 invocations (Perplexity API Platform Forum). The launch positions Perplexity to compete directly with Bloomberg-grounded enterprise agents and removes one of the messier integration steps for fintech builders.
Cerebras prices its IPO at a $26.6 billion valuation. The wafer-scale AI chipmaker is selling 28 million shares at $115–$125, targeting roughly $3.5 billion in proceeds ahead of its May 13 Nasdaq debut under the ticker CBRS (CNBC). The prospectus disclosed an OpenAI compute contract worth more than $10 billion through 2028, instantly making Cerebras the first publicly traded AI-chip pure play with frontier-lab anchor demand outside Nvidia.
The Open Compute Project ships a multi-vendor networking spec. AMD, Broadcom, Intel, Microsoft, Nvidia, and OpenAI jointly released the Multipath Reliable Connection specification through the Open Compute Project, formalising a fault-tolerant networking layer designed for hyperscale AI training clusters (SemiAnalysis). It is the rare case of a real cross-vendor standard emerging while the underlying market is still red-hot, and it tilts the field slightly away from Nvidia’s NVLink monopoly on intra-cluster bandwidth.
Anthropic posts an 80x usage growth number for Q1. Inside the same investor materials that surfaced the SpaceX deal, Anthropic disclosed roughly 80x year-over-year growth in API usage and revenue during the first quarter, eight of the Fortune 10 as customers, and more than 1,000 enterprises spending $1 million-plus annually (Air Street’s State of AI: May 2026). The disclosure was the implicit justification for needing 300 more megawatts in thirty days.