
Under the deal being discussed, Meta would start renting compute on Google’s Tensor Processing Units as early as next year, Reuters reports.
Continue reading “Nvidia responds to report that Meta might use Google’s TPU chips”


This comes as almost every other Big Tech company is increasing data center spending and Google has set aside $93 billion in «capital expenditures» this year to do the same.
This will be followed by a «significant increase» in 2026, but likely not matching OpenAI’s enormous $1.4 trillion data center spending.
They are aiming to «spend a lot,» and hit «the next 1000x in 4-5 years,» Vahdat is reported to have said.
That’s one thousand times more «capability, compute and storage networking» that he aims to add for «essentially the same cost, power and energy spend.»
Read more: The scoop at CNBC, writeups at Ars Technica and Gizmodo.

— We’re increasingly going to be customers of each other. We will use Anthropic models, they will use our infrastructure and we’ll go to market together, Microsoft CEO Satya Nadella said, according to Reuters.
The agreement means Anthropic will optimize their software stack to better run on Nvidia’s hardware, while Nvidia will «optimize for Anthropic workloads.»
Continue reading “Anthropic partners with Nvidia, Microsoft in $30 billion Azure deal”

— This investment will create thousands of jobs, provide skills training to college students and electrical apprentices, and accelerate energy affordability initiatives throughout Texas, Alphabet CEO Sundar Pichai said, according to Reuters.
The investments will be made through 2027, but Google does not say when the data centers will come online.
They will also bring new funding for the power grid to support 6 gigawatts of «new energy generation and capacity,» and will fund some 1,700 new electrical apprenticeships.
This makes Texas the second largest data center state in the USA, after Virginia, notes the Texas Tribune.
— They say that everything is bigger in Texas — and that certainly applies to the golden opportunity with AI, Pichai said.
Read more: Google’s blog, writeups at Reuters and the Texas Tribune.

They are building out data centers because «it’s the right strategy to aggressively front-load capacity so we’re prepared for the most optimistic cases,» according to CEO Mark Zuckerberg.
Meta saw an 83% drop in operating income last quarter, and plans capital expenditures of $70 billion this year alone, but will have to increase that threefold to reach its target by 2028.
Read more: Reuters, The Register and Business Insider.

Gemini is launching to orbit
Google’s latest moonshot might be almost literal: they are preparing to send their TPU processors into low-Earth orbit, and perhaps eventually build a proper AI data center in space, where ample sunlight can supply its energy. A TPU has already survived testing under orbit-like radiation conditions in a particle accelerator, and the next step is the launch of two prototype satellites in early 2027. Google calls it Project Suncatcher, and says that «in the future, space may be the best place to scale AI compute.»
More at: Sundar Pichai’s tweet, Google’s announcement blog
Google close to Apple deal for AI Siri
Apparently, Apple has chosen Gemini for the upcoming AI version of its Siri assistant. It will use what is likely a custom version of the model with 1.2 trillion parameters, running on Apple’s Private Cloud Compute servers. Apple supposedly also tested options from OpenAI and Anthropic, but Anthropic’s fees were too high, and Apple already partners with Google for search results. The deal will cost Apple $1 billion a year, far less than the $20 billion Google pays Apple to be its default search provider.
More at: Bloomberg, MacRumors, TechCrunch.
Read on for more!

The coming AI wars will be fought with data centers and gigawatts, and nobody wants to lose out.
Continue reading “Big Tech doubles down on even bigger AI spend”

Deployment starts in late 2026, just like the AMD and Nvidia deals. From these agreements alone, OpenAI will have added 26 gigawatts of capacity, and one can only wonder how capable future versions of GPT will be.
Continue reading “Broadcom to supply OpenAI with 10 GW’s worth of custom chip capacity”

Their CEO has previously said they aim to add 1 gigawatt per week in the future, roughly a Hoover Dam every fourteen days, and says many more deals are yet to come.
On Andreessen Horowitz’s a16z podcast today, Altman said «You should expect much more from us in the coming months,» as reported by TechCrunch.
Continue reading “Altman says more, «aggressive» deals to come off heels of AMD agreement”