At Nvidia’s annual developer conference, GTC 2026, CEO Jensen Huang said the company expects to generate about $1 trillion in cumulative revenue from AI chips and systems by 2027. The estimate reflects strong demand for its current Blackwell chips as well as newly introduced platforms such as Vera Rubin, which has been announced and entered production, with large-scale deployments expected to expand through 2026. Notably, Vera Rubin is designed to handle massive AI workloads across both training and inference.
The latest forecast is far higher than the semiconductor giant’s earlier projection of around $500 billion in AI-related revenue opportunity; within a short span, that figure has doubled. Nvidia’s recent financial performance supports the claim. The company reported about $215.9 billion in revenue for fiscal year 2026, up around 65% from the previous year. In the fourth quarter of fiscal 2026 alone, it earned about $68.1 billion, growing more than 73% year-on-year. Of that, almost $62.3 billion came from the data center business, meaning over 90% of quarterly revenue is now tied to AI and cloud computing. For the full year, data center revenue reached roughly $193-194 billion.
A big reason behind such demand is Nvidia’s Blackwell chips, which are already deployed by large tech companies and deliver major improvements in performance and energy efficiency over earlier generations. The company is now shifting focus toward its next-generation platform, Vera Rubin, built for extreme-scale AI workloads. Nvidia says these systems can support up to 144 GPUs in a single rack and are optimized for high-throughput AI processing. Combined systems integrating Rubin with other Nvidia technologies are expected to deliver up to 500× more high-bandwidth memory than earlier generations.
Another major reason behind the trillion-dollar forecast is the rapid rise of AI inference, which Huang described as the next phase of computing demand. Unlike training, which happens periodically, inference runs continuously: serving user queries, powering applications, and enabling real-time AI systems. Nvidia estimates that inference workloads could eventually surpass training in total compute demand, significantly expanding the market for its chips.
The demand for Nvidia’s products is mainly coming from large cloud companies like Microsoft, Amazon, and Meta. Around 60% of Nvidia’s revenue now comes from such hyperscale customers, while the remaining 40% comes from enterprises, governments, and other industries.
However, despite all this, the path to $1 trillion in revenue is not without challenges. The semiconductor industry is becoming increasingly competitive, with major tech companies developing their own AI chips. Google uses its custom Tensor Processing Units (TPUs), while Amazon has built Trainium and Inferentia chips for AI workloads. Similarly, Microsoft has introduced its Maia AI chips, and Meta has developed its own Meta Training and Inference Accelerator (MTIA) chips to run AI workloads.
The Tech Portal is published by Blue Box Media Private Limited. Our investors have no influence over our reporting.