
The system consists of six chips: a CPU, a GPU, an NVLink chip, a NIC, a DPU, and an optics chip.
Nvidia says it can serve a mixture-of-experts model at 10x lower cost per inference token, and train one with 4x fewer GPUs, compared to the Blackwell platform.
The platform is now rolling out to nearly every cloud provider, as well as to Nvidia partners including Anthropic, OpenAI, and Amazon, according to TechCrunch.
Jensen Huang estimates that AI companies will spend between $3 trillion and $4 trillion on infrastructure over the next five years.
Read more: Nvidia’s press release and CES keynote; writeups on The Verge and TechCrunch.