At its GTC conference in San Jose, Nvidia unveiled a bold projection of at least $1 trillion in total addressable revenue for its artificial intelligence chips through 2027. The announcement accompanied the introduction of new Vera Rubin processors and of systems powered by Groq technology, signaling a strategic push into the AI inference market, where trained models generate responses for end users.
The Inference Inflection Point
Chief Executive Jensen Huang declared that the "inference inflection has arrived," highlighting a shift in enterprise AI spending from primarily training models to deploying them at scale. This transition represents the next major wave of investment, as companies move beyond development to serving hundreds of millions of users with operational AI applications.
Nvidia's latest architectural approach divides the inference workflow between specialized components. Vera Rubin chips handle the prefill phase, ingesting a user's request and processing its tokens into the working context the model needs before it can answer. Groq hardware then manages the decode phase, generating the response tokens delivered back to the user. The company confirmed its Vera Rubin platform has reached full production, with configurations running seven chips across five rack-scale systems.
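The split mirrors how large-language-model serving is commonly described: a compute-heavy prefill pass over the whole prompt, followed by a memory-bandwidth-bound, token-by-token decode loop that reuses the state prefill produced. The Python sketch below is a minimal, hardware-agnostic illustration of that general pattern; the toy model, the KVCache structure, and all function names are assumptions made for illustration, not Nvidia's or Groq's actual software.

```python
# Illustrative sketch only: a toy view of the prefill/decode split.
# Every name and data structure here is a stand-in, not a real API.

from dataclasses import dataclass, field

@dataclass
class KVCache:
    """Per-request state produced by prefill and reused by decode."""
    keys: list = field(default_factory=list)
    values: list = field(default_factory=list)

def prefill(prompt_tokens: list[int], cache: KVCache) -> int:
    # Prefill is compute-bound: all prompt tokens are processed at once,
    # populating the cache in one large batch of work.
    for tok in prompt_tokens:
        cache.keys.append(tok)       # stand-in for attention keys
        cache.values.append(tok)     # stand-in for attention values
    return sum(cache.keys) % 100     # stand-in for the first output token

def decode(first_token: int, cache: KVCache, max_new: int) -> list[int]:
    # Decode is memory-bandwidth-bound: one token per step, each step
    # re-reading the accumulated cache, which favors different hardware.
    out = [first_token]
    for _ in range(max_new - 1):
        nxt = (out[-1] + len(cache.keys)) % 100  # stand-in for sampling
        cache.keys.append(nxt)
        cache.values.append(nxt)
        out.append(nxt)
    return out

if __name__ == "__main__":
    cache = KVCache()
    first = prefill([12, 7, 42, 99], cache)   # phase 1: process the prompt
    print(decode(first, cache, max_new=5))    # phase 2: generate a response
```

Because the two phases stress hardware so differently, disaggregating them lets each run on silicon tuned for its bottleneck, which is the rationale behind pairing distinct chips for prefill and decode.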
Expanding Manufacturing Partnerships
In a significant development for its supply chain, Nvidia revealed that Samsung Electronics will manufacture the Groq LP30 chip on a 4-nanometer process. Shipments are scheduled to begin in the second half of 2026, and the news lifted Samsung's shares by as much as 5% in Seoul trading on Tuesday.
Nvidia's financial momentum continues: the company reported fiscal 2026 revenue of $215.9 billion last month and anticipates approximately $78 billion in sales for the current quarter. The stock traded around $183.22 on Tuesday, giving the semiconductor giant a market valuation of roughly $4.53 trillion.
Beyond Single Chips to Complete Systems
Industry analysts note Nvidia's evolution from selling individual processors to offering comprehensive rack-scale solutions encompassing computing, networking, and storage components. This approach positions the company to capture a larger portion of each AI data center deployment, moving up the value chain in infrastructure provisioning.
Market observers point to Nvidia's dominant position: the company currently holds close to 90% of both the training and inference markets, according to Summit Insights Group. Analysts nonetheless anticipate erosion by 2027 as customers increasingly develop application-specific chips, particularly for inference workloads, where competition from Google's custom silicon and from Intel and Advanced Micro Devices processors is intensifying.
The revised projection represents a substantial increase from Nvidia's February estimate of a $500 billion market opportunity for its Blackwell and Rubin architectures through 2026. The new figure doubles that forecast while extending the horizon by a year, underscoring the company's confidence in sustained demand for AI infrastructure, though investors continue to weigh whether the broader AI spending wave will translate into durable financial gains.
Nvidia's strategic emphasis on inference reflects a recognition that the initial, training-focused phase of AI investment is maturing. The company aims to keep investors focused on the growing demand for serving trained models rather than merely building them, even as questions persist about long-term market sustainability and competitive dynamics in a rapidly evolving AI hardware landscape.