
Broadcom's Google AI Deal Intensifies Competition with Nvidia

Broadcom announced a major partnership to build Google's future tensor processing units and supply AI rack components through 2031. The deal highlights escalating competition in the AI semiconductor market, where Nvidia currently leads.

Sarah Chen · 4 min read
Mentioned in this article: AMZN $213.77 (+0.46%), AVGO $333.97 (+6.21%), GOOGL $305.46 (+1.82%), MU $377.58 (-0.05%), NVDA $178.10 (+0.26%)

Broadcom has entered into a significant long-term agreement to collaborate with Google on the development of its next-generation tensor processing units (TPUs) and to supply critical components for artificial intelligence server racks. The partnership, which extends through the year 2031, represents a substantial commitment in the rapidly expanding AI infrastructure landscape. This move directly positions Broadcom as a formidable player in the custom AI chip arena, a domain where Nvidia has established overwhelming market leadership with its graphics processing units (GPUs).

Market Reaction and Financial Implications

Following the announcement, Broadcom's stock surged 6.3% in premarket trading to reach $333.97, reflecting strong investor confidence in the deal's strategic value. In contrast, Nvidia shares experienced a more modest gain of 0.3%, edging up to $178.10. The market response underscores the growing perception that the competitive dynamics within the AI chip sector are evolving. Concurrently, broader market indices like the Nasdaq 100 futures rallied, influenced in part by separate geopolitical developments including a reported U.S.-Iran ceasefire.

In a recent regulatory filing dated April 6, Broadcom formalized its commitment to the multiyear collaboration with Google. The scope includes not only the design and production of future TPU iterations—specialized processors tailored for Google's internal AI workloads—but also encompasses the provision of networking hardware and other essential elements for Google's next-generation AI computing racks. This comprehensive supply agreement locks in a key revenue stream for Broadcom for the remainder of the decade.

Anthropic's Compute Commitment and Revenue Surge

The scale of impending AI infrastructure investment was further highlighted by details from AI firm Anthropic. The company disclosed it has secured access to approximately 3.5 gigawatts of next-generation TPU-based computing power from Broadcom, with deliveries scheduled to commence in 2027. This massive compute commitment coincides with a remarkable acceleration in Anthropic's financial performance. The company reported that its Claude AI model has achieved an annual run-rate revenue exceeding $30 billion, a dramatic increase from the roughly $9 billion recorded at the close of 2025. Anthropic also noted its current infrastructure utilizes a diversified chipset, running Claude on Amazon's Trainium chips, Google TPUs, and Nvidia GPUs.
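The "annual run-rate revenue" figure cited above is a standard annualization convention: take the most recent period's revenue and scale it to a full year. A minimal sketch of that arithmetic, using hypothetical monthly figures chosen only to match the article's $30 billion headline number (not Anthropic's actual disclosures):

```python
def annual_run_rate(latest_month_revenue_usd: float) -> float:
    """Annualize the most recent month's revenue (the common 'run-rate' convention)."""
    return latest_month_revenue_usd * 12

# Illustrative figure only: a $2.5B month implies a $30B annual run rate.
monthly = 2.5e9
print(f"${annual_run_rate(monthly) / 1e9:.0f}B run rate")  # prints "$30B run rate"
```

The convention assumes the latest month is representative; for a company growing as fast as the article describes, the run rate can overstate trailing-twelve-month revenue while understating the year ahead.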

Nvidia's Trillion-Dollar Ambition and Strategic Moves

Nvidia finds itself at a critical juncture despite its dominant position. Last month, CEO Jensen Huang projected that the company's upcoming Blackwell and Rubin chip architectures could collectively generate over $1 trillion in revenue by the end of 2027—a forecast that notably excludes the potential contribution from the advanced Rubin Ultra variant. Analysts, such as eMarketer's Jacob Bourne, interpret this ambitious target as evidence of "durable demand" for Nvidia's AI solutions. The company is aggressively expanding its focus into AI inference—the process of generating responses from trained models—and agentic AI, where software agents autonomously execute tasks.

In a strategic counter to the rise of custom silicon, Nvidia is working to integrate competing chips into its own ecosystem. On March 31, the company announced that Marvell Technology would join its NVLink Fusion platform. This rack-scale architecture is designed to connect semi-custom application-specific integrated circuits (ASICs) with Nvidia's central processing units, networking gear, and broader software stack. The initiative signals Nvidia's intent to capture a larger portion of the custom-chip market by ensuring these components interoperate seamlessly with its proprietary infrastructure.

Analyst Perspectives on Market Share Shifts

Industry analysts are closely monitoring the mounting competitive pressure in the inference segment. Kinngai Chan of Summit Insights observed that Nvidia currently commands over 90% of the combined AI training and inference market. However, he cautioned that this grip could begin to loosen by 2027 as more large technology firms scale up their in-house custom ASIC programs, particularly for inference workloads. These specialized chips, designed for specific tasks, are becoming increasingly viable at scale.

Broadcom has substantiated its optimistic outlook with concrete financial projections. In March, the company forecast that its AI chip revenue would surpass $100 billion by 2027. This follows a recent quarterly performance in which AI-related sales more than doubled to $8.4 billion. CEO Hock Tan cited "dramatically improved" visibility into future demand through 2027, a sentiment echoed by D.A. Davidson analyst Gil Luria, who viewed the guidance as indicative of surging market needs.

Execution Risks and Supply Chain Considerations

Despite Nvidia's formidable scale—evidenced by a 94% year-over-year sales surge to $68.13 billion in the January quarter—the company faces notable execution risks. A Barron's report, citing analysis from KeyBanc's John Vinh, indicated that qualification delays for next-generation HBM4 memory at suppliers SK Hynix and Micron could reduce Nvidia's planned 2026 production volume for its Rubin GPUs from 2 million units to approximately 1.5 million. While Vinh maintains an Overweight rating on Nvidia, any significant delay in the Rubin ramp-up could create an opening for competitors as customers explore alternative suppliers.
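The scale of the production cut in the KeyBanc scenario is worth making explicit. Using the unit figures cited in the article (2 million planned Rubin GPUs, revised to roughly 1.5 million), the shortfall works out as follows:

```python
def shortfall_pct(planned_units: int, revised_units: int) -> float:
    """Percentage reduction from a planned production volume."""
    return (planned_units - revised_units) / planned_units * 100

# Figures from the KeyBanc scenario cited above: 2.0M planned units cut to ~1.5M.
print(shortfall_pct(2_000_000, 1_500_000))  # prints 25.0, i.e. a 25% cut
```

A quarter of planned 2026 Rubin volume is a meaningful gap, which is why the memory-qualification timeline at SK Hynix and Micron matters so much to the competitive picture.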

The current landscape suggests the AI arms race is broadening rather than contracting. Google is accelerating its TPU development, Anthropic is deploying models across three distinct chip families, and Nvidia is opening its NVLink Fusion to a wider array of custom processors. The central competitive question is shifting from a focus on individual chip performance to the resilience and attractiveness of entire hardware and software ecosystems, determining which platform proves most difficult for customers to replace.

This article is for informational purposes only and does not constitute financial advice or a recommendation to buy or sell any security. Market data may be delayed. Always conduct your own research and consult a licensed financial advisor before making investment decisions.
