
Under the deal being discussed, Meta would start renting compute on Google’s Tensor Processing Units as early as next year, Reuters reports.
Meta would then install Google chips in its own data centers from 2027 onward, which has Nvidia slightly spooked.
Demand is popping
"We are experiencing accelerating demand for both our custom TPUs and Nvidia GPUs," a Google spokesperson said, according to CNBC. "We are committed to supporting both, as we have for years."
"We’re delighted by Google’s success — they’ve made great advances in AI and we continue to supply to Google. NVIDIA is a generation ahead of the industry — it’s the only platform that runs every AI model and does it everywhere computing is done. NVIDIA offers greater…"
— NVIDIA Newsroom (@nvidianewsroom) November 25, 2025
Nvidia has every reason to be confident, though: both Meta and Google are already its customers, and both know firsthand how well Nvidia’s chips run their AI workloads.
$51 billion market
At issue is whether Google can make a dent in Nvidia’s data center business. That segment raked in $51 billion in its most recent quarter alone, according to Tom’s Hardware, and if Google captured even ten percent of it, it would be earning billions.
In its response, Nvidia argues that its GPUs are flexible enough to run every AI model and every workload, whereas ASICs like Google’s TPUs are single-purpose chips designed for specific AI frameworks or functions.
Needs to double every six months
Nvidia currently holds more than 90% of the market for AI chips, CNBC reports.
According to Nvidia, Google remains in constant touch with it about new Blackwell GPUs for its AI stack, and Google said just a few days ago that it needs to double its capacity every six months to meet demand.
Read more: CNBC on the response, Reuters on the original story, and a writeup by Tom’s Hardware.