Yesterday, Reuters reported that Jensen Huang walked onto the stage at the SAP Center in a leather jacket, in front of a packed house, and described what $1 trillion in chip orders looks like.
That number, purchase orders for Blackwell and Vera Rubin combined through 2027, is double what Nvidia projected a year ago. Nvidia shares rose 2% on the day. The crowd was enthusiastic in the way that crowds get when the person on stage is, by most available measures, the most important person in the room.
Here is what was actually announced. The Groq 3 Language Processing Unit, Nvidia’s first chip from the $20 billion Groq acquisition it completed in December, ships in Q3. It is built for inference, the part of AI that generates responses in real time, and it sits alongside Vera Rubin in a rack configuration holding 256 LPUs. The Kyber architecture, Nvidia’s next rack design after Rubin, stacks 144 GPUs vertically to boost density and cut latency; it arrives in 2027 as Vera Rubin Ultra. Further out, Huang previewed Feynman, built on a 1.6-nanometer process, which would be the smallest process node in the industry by a significant margin. Nissan, BYD, Geely, Hyundai, and Isuzu are building Level 4 autonomous vehicles on Nvidia’s Drive Hyperion platform. NemoClaw, an open-source enterprise agent platform, was introduced for companies trying to deploy AI agents at scale with some governance attached.
Huang used the word “agentic” a lot. He used it on Nvidia’s earnings call last month too, about a dozen times. That repetition is not accidental.
So what is actually being built here, underneath the product names and the roadmap slides?
Nvidia already holds roughly 80% of the AI training chip market. What GTC 2026 was, in plain terms, was the company announcing its intention to own inference too. Training is how you build an AI model. Inference is how it runs in the world every time someone uses it. Every query, every agent action, every automated decision, every token generated by every AI product used by every person or company on earth runs on inference hardware. Nvidia, which already built the roads, is now announcing it wants to build the engine inside every car on them.
The CPU announcement is the part that gets less coverage but deserves attention. Agentic AI, the kind where software systems take actions autonomously across multiple steps, requires something to sit in the middle and orchestrate. That job falls to the CPU. Nvidia’s own infrastructure head told CNBC this week that CPUs are now the bottleneck, and Nvidia has a CPU designed specifically for that orchestration role. Meta is already running it in its data centers.
There is a RAM shortage worth knowing about too. The demand for AI infrastructure has created supply constraints that run downstream into phones, laptops, and consumer electronics. Gaming GPU releases are delayed. The silicon is going to the data centers. This is what it looks like when an industry reorganizes its supply chain around a single application.
What Huang described yesterday, across two hours and several product lines, is a vertical stack. Chips for training. Chips for inference. CPUs for orchestration. Rack architectures for scale. Software platforms for enterprise deployment. Autonomous vehicle systems. Robotics. The only thing Nvidia does not make is the model itself, and the companies that make the models need Nvidia to run them.
That is not a chip company anymore. That is closer to the physical layer of a new kind of internet, one where intelligence is the thing being transmitted, and Nvidia is building the pipes, the switches, and increasingly the routers.
The question that does not fit neatly into a keynote is what happens to everything downstream of this concentration. When one company supplies the infrastructure that every AI product in every industry depends on, the dynamics start to look less like a technology market and more like a utility. The difference being that utilities are regulated and Nvidia, for now, is not.
The leather jacket plays well in San Jose. The $1 trillion number plays well on earnings calls. The thing worth watching is what the world looks like when the megastructure is finished.