
NVIDIA (NASDAQ:NVDA) Chief Financial Officer Colette Kress outlined the company’s product roadmap and demand outlook during JPMorgan’s virtual fireside chat at the 2026 Consumer Electronics Show, describing what she called three simultaneous “transitions” driving accelerating compute needs: the shift from CPU-based computing to accelerated computing, the expansion of generative AI, and an emerging move toward “Agentic AI.”
Kress said these transitions are “all occurring” and are contributing to “an exponential growth in terms of our compute,” while also setting the stage for what NVIDIA is calling Physical AI—robotics and other AI systems that interact with the real world.
Vera Rubin platform: six-chip system and second-half launch
She emphasized that Vera Rubin should be viewed as a co-designed data center infrastructure "at scale," rather than a single chip or a rack-level component. Kress described it as a portfolio of six chips designed together: the Rubin GPU, the Vera CPU, the company's next NVLink generation, the Spectrum-X "SuperNIC," BlueField, and a co-packaged optics (CPO) switch.
Kress highlighted several performance and cost claims for the full system, saying the new platform can cut "time to drain down to 1/4" compared with Blackwell, deliver "10x higher throughput," and achieve "1/10th lower token cost" during inference.
Physical AI: early revenue and automotive tie-ins
In the Q&A, JPMorgan analyst Harlan Sur pressed on whether Physical AI is already financially material to NVIDIA's data center revenue. Kress said the company is already earning revenue from automotive work, citing Mercedes as an example of a customer bringing high-end self-driving capabilities to market after "eight years" of work. She also pointed to the pairing of data center infrastructure, used to process and train on collected data, with compute inside the vehicles themselves.
Looking forward, Kress said the simulation capability and learnings developed for automotive work "carries very nicely" into robotics, and she referenced NVIDIA's Jetson and Omniverse platforms as well as a focus on open-source models for Physical AI use cases.
Demand, supply planning, and customer spending into 2027
On supply constraints for the second-half Vera Rubin ramp, Kress said NVIDIA has been planning capacity well ahead of near-term needs, noting that building a data center infrastructure system can take “three quarters to a year” from start to finish and that supply purchasing has been “in the works for a couple of years.” She said NVIDIA feels “very solid” about supply for the new calendar year and is comfortable with what has been ordered and confirmed, while acknowledging that longer-term growth will continue to depend on how much additional capacity suppliers can add.
Sur also asked about customer spending trajectories into calendar 2027. Kress said customers are already thinking about how to plan for 2027 deployments, particularly around "land, power, and shell," because of the multi-year timeline to stand up new data centers. She added that NVIDIA still sees unmet demand in 2026 and that customers are looking for "quick adds" in 2026 even as they plan for 2027.
Kress revisited NVIDIA’s earlier disclosure that the combined opportunity for Blackwell and Vera Rubin through 2026 was “about $500 billion,” and said demand has continued to increase since that figure was discussed. She said NVIDIA is now receiving orders for Vera Rubin and is working with customers to plan “a full year of volume.” While she did not quantify an updated number, she stated that the $500 billion figure “has definitely gotten larger,” adding that the company is beginning to look beyond 2026 as well.
Networking attach and Spectrum Ethernet momentum
Sur asked about the rising importance of networking as customers move toward rack-scale systems. Kress said the company tracks networking "attach rate" rather than dollars, and said attach is "nearing 90%" among customers buying full systems.
She characterized networking as essential to scaling and managing AI workloads, pointing to the company’s portfolio spanning InfiniBand and Ethernet, as well as NVLink. Kress said NVIDIA has seen strong adoption of Ethernet capabilities and argued that GPU compute alone is insufficient without networking to handle complexity across training and inference.
Asked whether networking growth should track compute growth, Kress said she expects “nearly the same, if not more” attach rate going forward, while noting that timing differences can occur depending on when networking is installed during a data center buildout.
China H200 licensing, Groq IP deal, gaming, and gross margins
On China, Kress said NVIDIA was “very pleased” the U.S. government approved sales of the H200 into China, but emphasized that shipments still require customer-specific licenses. She said customers have requested licenses and that NVIDIA is waiting for the U.S. government to complete the process, describing officials as working “feverishly” on it. Kress said NVIDIA has heard strong demand signals from China customers and wants to be prepared as purchase orders and licensing approvals come together, but noted that timing is outside the company’s control.
Kress also addressed NVIDIA’s recently announced non-exclusive IP licensing deal with Groq, saying NVIDIA acquired both Groq IP and “an exceptional team” that has joined NVIDIA. She described Groq’s work as focused on low-latency inferencing and said NVIDIA is excited about collaboration, but did not provide timing for products that might result from the deal.
On gaming, Sur noted the absence of new GeForce announcements at CES and asked about memory supply and allocation priorities. Kress said gaming has been “a home run,” and that NVIDIA initially underestimated growth at the beginning of the Blackwell cycle but has since brought supply “up to a good level.” She said demand remains strong and that NVIDIA intends to serve as much demand as possible, while suggesting the company would provide more detail later in the year.
Finally, Kress discussed gross margins after Sur asked about levers to protect profitability amid rising input costs. She reiterated NVIDIA’s goal of maintaining “mid-70s” gross margins, describing the effort as requiring coordinated work across suppliers, manufacturing, and internal teams. She pointed to efforts such as improving cycle time and execution across multiple platforms, and referenced steps underway for Vera Rubin as well as “GB300” as the company manages product mix and system complexity in the year ahead.
About NVIDIA (NASDAQ:NVDA)
NVIDIA Corporation, founded in 1993 and headquartered in Santa Clara, California, is a global technology company that designs and develops graphics processing units (GPUs) and system-on-chip (SoC) technologies. Co-founded by Jensen Huang, who serves as president and chief executive officer, along with Chris Malachowsky and Curtis Priem, NVIDIA has grown from a graphics-focused chipmaker into a broad provider of accelerated computing hardware and software for multiple industries.
The company’s product portfolio spans discrete GPUs for gaming and professional visualization (marketed under the GeForce and NVIDIA RTX lines), high-performance data center accelerators used for AI training and inference (including widely adopted platforms such as the A100 and H100 series), and Tegra SoCs for automotive and edge applications.
