
But here’s the truth: None of these numbers are easy to verify — and each company counts differently.
Google uses TPUs, not GPUs
Comparing these setups is an apples-to-oranges exercise: Google leans heavily on its own TPUs (Tensor Processing Units) and has an estimated 10-15 exaflops of compute power. Google doesn't publish this information, so the estimates are closer to guesstimates, and the number grows by the day.
AWS powers Anthropic with massive compute
Amazon Web Services, which powers Anthropic, relies heavily on its own custom Trainium chips (Amazon won't say how many, and the count is growing fast), but also has access to some 800,000 Nvidia GPUs in its cloud. Not all of that capacity is used for Anthropic, though, which makes any estimate shaky.
xAI has its own "supercomputer"
xAI recently trained Grok 4 on Colossus, a cluster of roughly 260,000 Nvidia GPUs of various flavors that the company claims is the largest AI supercomputer in the world. Like every other AI lab, xAI has ambitious scaling plans.
Meta has big ambitions
Then there's Meta, which holds about 600,000 Nvidia H100s as of 2024 but has announced massive expansion plans to bring that number up to 1.3 million by year's end and scale up from there, lately preferring to speak in terms of gigawatts rather than compute.
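To see why these counts don't translate cleanly into a single league table, here is a rough back-of-envelope sketch (in Python) that converts a GPU count into exaflops. The per-chip figures are Nvidia's commonly quoted H100 SXM peak tensor throughputs; the 40% utilization factor is purely an illustrative assumption. Depending on which precision and utilization you assume, the same 600,000 chips span roughly an order of magnitude of "exaflops," which is part of why the labs' headline numbers aren't directly comparable.

```python
# Back-of-envelope sketch: turning a GPU count into "exaflops".
# Per-chip numbers are Nvidia's published H100 SXM peak tensor throughputs;
# the utilization factor is an assumption, which is exactly why headline
# compute figures vary so much between labs.

H100_PEAK_TFLOPS = {
    "BF16 dense": 989,    # ~0.99 PFLOP/s per chip
    "FP8 dense": 1979,    # ~1.98 PFLOP/s per chip
    "FP8 sparse": 3958,   # with 2:4 structured sparsity
}

def cluster_exaflops(num_gpus: int, tflops_per_gpu: float, utilization: float = 1.0) -> float:
    """Cluster throughput in exaFLOP/s (1 exaFLOP/s = 1e6 TFLOP/s)."""
    return num_gpus * tflops_per_gpu * utilization / 1e6

if __name__ == "__main__":
    gpus = 600_000  # Meta's reported 2024 H100 count
    for label, tflops in H100_PEAK_TFLOPS.items():
        peak = cluster_exaflops(gpus, tflops)
        realistic = cluster_exaflops(gpus, tflops, utilization=0.4)  # assumed 40% utilization
        print(f"{label:>11}: {peak:6.0f} EF peak, ~{realistic:.0f} EF at 40% utilization")
```

Run it and the "right" answer for the same 600,000 GPUs ranges from a few hundred to a couple of thousand exaflops, all from changing assumptions about precision and utilization.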
Nvidia wins big
It's not easy to compare the AI labs on compute, since they all use different metrics and often different chips altogether. The only real winner in this game so far is Nvidia, which is churning out high-performance chips at a record pace and is currently the most valuable company on earth, having recently surpassed $4 trillion in market cap.
OpenAI is no stranger to growth ambitions either, with Sam Altman telling his engineering staff to "get to work figuring out how to 100x that."