4Q25 Earnings Takeaways:
- Outperformance: NVIDIA beat consensus on both revenue and earnings, demonstrating continued AI demand strength.
- Data center dominance: The data center segment remains the core growth engine, with very high margins.
- Bullish guidance: Guidance well above expectations further supports the AI growth narrative.
- Strategic momentum: New platforms (e.g., Vera Rubin), partnerships, and performance leaps were reiterated on the call.
- Strong revenue guidance: ~$78 billion ±2%, well above the ~$72 billion consensus estimate. This points to strong sequential growth continuing into FY27, driven by data center demand.
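The size of the guidance beat follows directly from the two rounded figures above. A minimal sketch, using only the approximate numbers cited in this post (both are estimates, not exact reported values):

```python
# Guidance-vs-consensus math using the rounded figures cited above.
guidance_mid = 78.0   # ~$78B revenue guidance midpoint, +/-2%
consensus = 72.0      # ~$72B consensus estimate

guidance_low = guidance_mid * 0.98
guidance_high = guidance_mid * 1.02
beat_pct = (guidance_mid / consensus - 1) * 100

print(f"Guidance range: ${guidance_low:.1f}B - ${guidance_high:.1f}B")
print(f"Midpoint beat vs consensus: {beat_pct:.1f}%")  # ~8.3%
```

Even the low end of the ±2% band sits comfortably above the consensus figure, which is why the guidance reads as unambiguously bullish.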
Key Drivers:
AI/Data Center Growth
Product Momentum
- Management highlighted strong performance per watt and cost-efficiency advances from Blackwell and NVL72/GB300 systems.
- First Vera Rubin systems have shipped as samples, with broader deployment expected later in the year.
Profitability and Margins
Other Segment Performance
Earnings Call / Conference Highlights
- CEO Huang stated the "agentic AI inflection point has arrived", signaling a shift toward autonomous, large-scale AI compute demand.
- Highlighted enterprise adoption, cloud AI investment, and NVIDIA's entrenched role in training and inference.
- CFO Colette Kress emphasised robust data center demand, continued backlogs, and initial Rubin shipments.
Risks & Uncertainties
- Ongoing geopolitical / China export restrictions could dampen future China-related revenue.
- Management acknowledged strong consensus expectations and noted mixed investor sentiment over long-term sustainability.
Analysis: New Vera Rubin AI Chips
Vera Rubin represents a step change in NVIDIA's AI infrastructure roadmap, trading higher raw power draw for massively better energy efficiency, more compute per rack, and operational improvements in cooling and serviceability. Its premium pricing reflects these gains and NVIDIA's position as a leader in high-end AI infrastructure:
Performance Efficiency Leap
- Vera Rubin reportedly delivers up to 10× more performance per watt than the existing Blackwell-based AI systems, according to NVIDIA's disclosure to CNBC. This efficiency gain comes even though the system as a whole may draw roughly twice the total power of Blackwell machines, meaning it does far more useful work for each watt consumed.
- This 10× figure aligns with other reports suggesting Rubin platforms can dramatically cut inference costs per token and boost throughput at scale compared with Blackwell.
- Why this matters: performance per watt is arguably the key metric for data-center AI infrastructure. It governs energy cost, rack density, cooling demands, and the economics of deploying large LLM inference and training workloads.
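Taken together, the two reported ratios above imply a much larger raw-throughput uplift than the headline efficiency figure alone. A back-of-the-envelope sketch, assuming the ~10× performance-per-watt and ~2× power-draw figures reported above (neither is a confirmed spec):

```python
# Both ratios are Rubin relative to Blackwell, as reported above.
perf_per_watt_ratio = 10.0  # reported ~10x performance per watt
power_ratio = 2.0           # system reportedly draws ~2x total power

# Throughput = (work per watt) x (watts drawn), so the implied raw
# throughput uplift per system is the product of the two ratios.
raw_throughput_ratio = perf_per_watt_ratio * power_ratio

print(f"Implied raw throughput vs Blackwell: ~{raw_throughput_ratio:.0f}x")  # ~20x
```

This is why perf/watt and total power draw should be read together: doubling power while multiplying perf/watt tenfold implies roughly twenty times the work per system, not ten.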
Architectural Evolution
- Vera Rubin isn't just a new GPU: it is a full rack-scale AI system that pairs the next-generation Rubin GPU architecture with a new Vera CPU, interconnected in a tightly integrated, liquid-cooled system.
- Compared with Blackwell, Rubin systems are designed to deliver much higher sustained throughput, especially in reasoning and long-context inference tasks, a major focus as AI models grow more complex.
Cooling & Efficiency
- NVIDIA has highlighted that the liquid cooling solution on Vera Rubin uses much less water than older cooling approaches, even as rack power densities climb. This is important operationally because cooling costs often rival compute costs in large data centers.
- Unlike some prior designs that depend heavily on chilled water and energy-intensive air conditioning, Rubin's advanced liquid cooling helps reduce water use and electrical overhead for thermal management.
Expected Pricing and Market Position
- AI market research firm Futurum Group projects Vera Rubin racks could be priced ~25% above Blackwell systems, reflecting both the performance uplift and NVIDIA's continued pricing power in the AI infrastructure market.
- Reported price estimates suggest millions of dollars per rack for top-end Rubin configurations, depending on density and features, a significantly larger investment for hyperscale deployments than older Blackwell gear.
Strategic Implications
- Efficiency as a Competitive Moat: The 10× performance-per-watt claim, if realized broadly in production workloads, could let cloud providers and enterprises run larger AI models without proportional increases in energy and cooling costs, a major differentiator.
- Premium Pricing Justified by Value: The expected ~25% markup reflects not just a generational hardware upgrade but the economics of AI scaling: faster models, lower per-token costs, and reduced operational overhead can justify higher upfront capex.
- Infrastructure Shifts: The move toward denser, liquid-cooled racks with higher total electrical loads suggests Vera Rubin will push data center operators to rethink power, cooling, and floor-planning strategies, while also unlocking new tiers of AI performance.
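The "premium justified by value" argument can be made concrete by dividing cost by work done. A rough sketch, normalizing Blackwell to 1.0 and using the ~25% capex premium, ~10× perf/watt, and ~2× power figures cited above (all reported or projected ratios, not NVIDIA-confirmed economics):

```python
# Per-unit-of-work economics, Blackwell normalized to 1.0 on every axis.
capex_premium = 1.25        # ~25% higher rack price (Futurum projection)
perf_per_watt_ratio = 10.0  # reported ~10x performance per watt
power_ratio = 2.0           # ~2x total system power draw

# Work per rack scales with perf/watt times power drawn.
throughput_ratio = perf_per_watt_ratio * power_ratio  # ~20x

# Cost per unit of work = cost ratio / throughput ratio.
capex_per_work = capex_premium / throughput_ratio   # capex side
energy_per_work = power_ratio / throughput_ratio    # = 1 / perf_per_watt

print(f"Capex per unit of work vs Blackwell:  ~{capex_per_work:.2f}x")
print(f"Energy per unit of work vs Blackwell: ~{energy_per_work:.2f}x")
```

Under these assumptions, a 25% higher sticker price buys roughly 20× the work per rack, so both capex and energy per unit of compute fall sharply, which is the core of the value argument even before operational savings on cooling.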
------------------------------
Carlos Salas
Portfolio Manager & Freelance Investment Research Consultant
------------------------------