Shares of NVIDIA Corporation fell $1.98 in Tuesday's trading session to close at $200.08. The decline followed reports that Alphabet's Google is in discussions with Marvell Technology about developing new artificial intelligence chips. The news arrives just ahead of the Google Cloud Next conference in Las Vegas, scheduled to begin on April 22, 2026, and underscores a broader industry trend: major cloud service providers are actively working to lessen their dependence on NVIDIA's graphics processing units (GPUs).
Strategic Shift Toward Inference and Custom Silicon
The timing of these reports is particularly significant as the AI hardware market increasingly pivots toward the inference phase—the stage where trained machine learning models generate responses and predictions for end-users. Google's strategic focus appears aligned with this shift. The company's Chief Scientist, Jeff Dean, has previously articulated the technical rationale for designing specialized chips tailored for either training or inference workloads separately. Industry analysts echo this sentiment.
"The competitive battleground is shifting towards inference," noted Chirag Dekate, an analyst at Gartner. He emphasized that in this new phase, operational metrics like speed, power efficiency, and total cost are becoming as critical as the raw computational scale that dominated the model-training era.
Details of the Reported Partnership
According to information reported by Reuters, Google is negotiating with Marvell on two fronts: a memory processor designed to work alongside Google's existing Tensor Processing Units (TPUs), and a next-generation TPU itself, engineered to run AI models with greater efficiency. This initiative builds upon Google's established, long-term TPU supply agreement with Broadcom, which extends through 2031. These moves collectively signal how cloud infrastructure leaders are accelerating their investments in custom silicon solutions beyond NVIDIA's industry-standard GPUs, which currently form the backbone for both AI training and inference workloads. Both Google and Marvell declined to comment on the reported discussions.
Even incremental signs of customer diversification attract intense scrutiny, given NVIDIA's extraordinary financial performance, which has been largely fueled by its data center segment. In its February earnings report, the company posted a record $68.1 billion in revenue for the fourth quarter, with a staggering $62.3 billion originating from its data center business. Looking forward, NVIDIA provided guidance projecting $78 billion in revenue for the current quarter, while explicitly noting that this forecast does not include any data-center compute revenue from the Chinese market.
Broader Market Context and Demand Signals
This activity does not suggest a downturn in overall AI investment demand. Key semiconductor industry bellwethers Taiwan Semiconductor Manufacturing Company (TSMC) and ASML both raised their financial forecasts just last week. TSMC's CEO, C.C. Wei, pointed to "very strong signals" from cloud clients regarding continued AI investment, clear evidence that hyperscale companies are not pulling back on their capital expenditures.
In a related research note published Sunday, analysts at Morgan Stanley highlighted the emerging trend of "agentic AI"—systems capable of handling complex planning and actions with minimal human intervention. They suggested this evolution could push chip spending beyond GPUs toward greater allocations for central processing units (CPUs) and memory, even as demand for graphics processors remains robust. Intel, a major supplier of the server CPUs that often pair with NVIDIA's GPUs, had previously cautioned that supply constraints for those companion chips were most severe in the first quarter.
Competitive Landscape and NVIDIA's Response
NVIDIA is not confronting an imminent collapse in orders. The more nuanced risk, as perceived by market observers, is that the company could see its rapid market share gains in the inference segment slow if its largest customers increasingly adopt in-house or partner-designed custom silicon. "Rivals ... will want to grab a piece of the market," stated Russ Mould, investment director at AJ Bell. He added that customers are also broadly motivated to diversify their supplier base for strategic and supply-chain resilience reasons.
Nevertheless, Google is not positioned to displace NVIDIA's dominance in the near term. Company executives have said that Google's infrastructure runs a combination of its own TPUs and NVIDIA's GPUs, and other tech giants such as Meta Platforms are only in the early stages of trialing TPUs for specific tasks. Google itself faces supply constraints as demand for its proprietary chips continues to climb.
NVIDIA is not passively observing this trend. Last month, the company announced an expansion of its partnership with Marvell, utilizing its NVLink Fusion technology—a system designed to seamlessly integrate custom chips, like those potentially developed by Google and Marvell, directly into NVIDIA's own server and networking hardware ecosystem. Reuters separately reported that NVIDIA has invested approximately $2 billion into Marvell, an effort aimed at smoothing this technical integration for its customers and maintaining its central role in AI infrastructure regardless of the underlying processor architecture.