
Anthropic Reveals $30B Revenue Run Rate, Massive Google Chip Deal

By Michelle Hawley
Anthropic's revenue has tripled as it locks in 3.5 gigawatts of next-gen Google TPUs — but Broadcom flags financial risks.

Anthropic disclosed Tuesday that its annualized revenue has reached $30 billion — more than tripling from roughly $9 billion at the end of 2025 — as it announced a sweeping agreement with Google and Broadcom to secure multiple gigawatts of next-generation computing power.

3.5 Gigawatts of AI Compute

The deal, which emerged through a Broadcom regulatory filing, calls for Anthropic to consume approximately 3.5 gigawatts of computing capacity powered by Google's next-generation Tensor Processing Units beginning in 2027. To put that figure into perspective, 3.5 gigawatts is enough electricity to power approximately 3 million homes.

Under the terms outlined in the filing, Broadcom will develop and supply Google's future generations of TPUs under a long-term agreement, while a separate supply assurance deal covers networking and other components for Google's next-generation AI server racks through 2031.

"This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure," said Krishna Rao, Anthropic's chief financial officer. "We are making our most significant compute commitment to date to keep pace with our unprecedented growth."

The announcement paints a picture of an AI industry where demand for computing power is growing so rapidly that companies are locking in infrastructure commitments years in advance.

Revenue Surge Reflects Booming Enterprise Demand

Anthropic's revenue disclosure is striking for the sheer speed of its growth. The company's key business metrics tell the story:

  • Annualized revenue: $30 billion, up from ~$9 billion at the end of 2025
  • Enterprise customers spending $1M+ annually: Over 1,000, up from ~500 in February

The numbers suggest that enterprise adoption of AI tools is accelerating sharply, with large organizations committing significant budgets to integrate AI into their operations.

Broadcom Flags Financial Risks

Despite the scale of the agreement, Broadcom's regulatory filing struck a cautious tone about one of its newest major customers. The chipmaker stated that Anthropic's consumption of the expanded computing capacity "is dependent on Anthropic's continued commercial success," and noted that the parties involved are in discussions with unspecified "operational and financial partners."

The language amounts to a formal acknowledgment that deploying 3.5 gigawatts of custom AI chips for a single customer carries meaningful financial risk — enough that Broadcom felt obligated to flag it for investors. While Anthropic's revenue growth is impressive, the company remains a privately held startup that has yet to demonstrate sustained profitability, and the computing infrastructure it is committing to will require enormous capital outlays.

For Broadcom, the Google partnership is a significant validation of its custom chip business. CEO Hock Tan argued in his company's Q1 2026 earnings call that major cloud providers lack the expertise to design advanced accelerators on their own, and has predicted that Broadcom's AI chip revenue alone will exceed $100 billion in 2027. Building next-generation TPUs for Google would be a major step toward that target.

A Multi-Cloud Strategy

Anthropic was careful to emphasize that the Google deal is just one piece of its computing strategy. The company said it trains and runs Claude across hardware from multiple providers — including AWS Trainium chips, Google TPUs and Nvidia GPUs — allowing it to assign different workloads to whichever chips are best suited for the task.

Amazon remains Anthropic's primary cloud provider and training partner, and the two continue to collaborate on Project Rainier, a large-scale AI computing initiative. Claude is also available through Microsoft Azure, making it the only frontier AI model offered on all three of the world's largest cloud platforms.

The vast majority of the new computing capacity will be located in the United States, Anthropic said, describing the deal as an extension of its November 2025 pledge to invest $50 billion in domestic computing infrastructure.

The Bigger Picture

The AI industry's appetite for computing power continues to outpace even the most aggressive projections. As companies like Anthropic race to develop more capable AI systems and serve surging customer demand, they are striking infrastructure deals of a scale and duration more commonly associated with energy or telecommunications companies.

Whether Anthropic can sustain the growth trajectory needed to justify commitments of this magnitude remains an open question, one that Broadcom, at least, has made clear it is watching carefully.

About the Author
Michelle Hawley

Michelle Hawley is an experienced journalist who specializes in reporting on the impact of technology on society. As editorial director at Simpler Media Group, she oversees the day-to-day operations of VKTR, covering the world of enterprise AI and managing a network of contributing writers. She's also the host of CMSWire's CMO Circle and co-host of CMSWire's CX Decoded. With an MFA in creative writing and a background in both news and marketing, she offers unique insights on the topics of tech disruption, corporate responsibility, changing AI legislation and more. She currently resides in Pennsylvania with her husband and two dogs.

Main image: https://stock.adobe.com/images/the-anthropic-logo-appears-on-a-smartphone-screen-in-this-photo-illustration-in-ontario-canada-on-march-5-2026/1937271054?prev_url=detail