Google’s next data center in Minnesota will have the world’s largest battery

Google says its energy buildout in Minnesota won’t lead to higher electricity costs for consumers. (Picture: Google)
The data center in Pine Island will have 1.9 gigawatts of capacity, sourced from new wind and solar power from Xcel Energy.

This will be paired with a 300 megawatt iron-air battery from Form Energy to keep operations running continuously.

That will be the world’s largest commercially deployed battery, and can provide power to the data center for a whopping 100 hours, TechCrunch notes.
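As a back-of-the-envelope check (assuming the battery can discharge at its full 300 MW rating for the entire duration), power times duration gives the stored energy:

```python
# Rough energy estimate: a 300 MW battery discharging for 100 hours
power_mw = 300      # rated power, megawatts
duration_h = 100    # discharge duration, hours
energy_gwh = power_mw * duration_h / 1000  # MWh converted to GWh
print(f"{energy_gwh} GWh")  # 30.0 GWh
```

That 30 GWh figure is what makes this the largest commercially deployed battery to date.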

As part of the buildout, Google announced that it will pay for its electricity in full, and will also invest $50 million in Xcel Energy’s green energy program to place batteries across the grid.

— Google’s partnership with Xcel Energy reimagines how data centers can be served, Google writes.

Read more: Google’s announcement, Xcel’s presser, CNBC and TechCrunch.

Meta to purchase 6 GW of custom AMD AI chips, take up to 10% ownership

Joining forces with AMD, Meta is set to receive hundreds of thousands of custom inference chips. (Picture: Meta)
According to Reuters, the deal is worth $60 billion, runs over five years, and comes hot on the heels of another Meta deal with Nvidia earlier this month.

The agreement will see the first gigawatt of GPUs and CPUs delivered in the second half of 2026, and includes several rack-scale chips that have yet to debut.

AMD and Meta have long been partners in developing custom chips, but these are specifically built for inference, the process of creating answers for user queries.

— This is an important step for Meta as we diversify our compute, says Mark Zuckerberg on the deal.

AMD entered into a similar deal with OpenAI in October 2025, meaning up to 20% of the company could soon be owned by AI labs.

Read more: AMD press release, Meta’s release, and Reuters report.

The hundreds of billions in AI investments flowing into India

The Indian AI Summit was a great success for the country, securing hundreds of billions in investments. (Picture: AI Impact Summit)
After the conclusion of the Indian AI Impact Summit, the chips are in: massive investments into data centers across the country.

India is OpenAI’s second-biggest market and looks set for a massive buildout of AI capacity in the years leading up to 2035.

Adani dishing out the dollars
First out is the Adani Group, pledging $100 billion to build data centers by 2035, which they expect to trigger «an additional $150 billion in secondary investments.» This should scale their data center portfolio from 2 gigawatts to 5 GW.

Continue reading “The hundreds of billions in AI investments flowing into India”

Nvidia strikes «multi-year strategic partnership» with Meta for AI chips

Likely consuming a significant share of Meta’s capital expenditures, the deal is expected to be worth tens of billions of dollars or more.
Both Meta and Nvidia are announcing a long-term, multi-generational strategic partnership today — without mentioning the price.

Meta, already a top customer for Nvidia, will use their chips in a «large-scale deployment» to build out data centers «optimized for AI training and inference,» they say.

The cost of the deal will likely run into the tens of billions of dollars or more, CNBC reckons, and includes access to future chips as well as the current Blackwell and Vera Rubin generations.

— We do expect a good portion of Meta’s capex to go toward this Nvidia build-out, chip analyst Ben Bajarin of Creative Strategies tells CNBC.

Reuters notes that Meta is likely one of the top three customers that together account for more than half of Nvidia’s sales.

Read more: Meta announcement, Nvidia announcement. Writeups on CNBC, Reuters and The Verge.

ByteDance is developing in-house AI chips, to be manufactured by Samsung

Nvidia chips are available in China, but users need permission to buy them. (Picture: Adobe)
Not much is known about the AI inference chips, or how they compare to Nvidia’s offerings, but ByteDance is going to be making about 100,000 of them «this year,» and then scale up to 350,000 units, according to Reuters.

ByteDance has been known to work with US chip producer Broadcom, and started seriously hiring chip specialists in 2022.

The new chips are set to be produced with Samsung in an agreement that also includes memory chips, a definite sweetener.

Development is advanced enough that engineering samples are due by late March, the last stage before mass production, Reuters’ sources say.

A spokesperson for the company does not deny the report outright, but claims the information is «inaccurate,» Reuters writes.

Most US frontier labs are developing their own chips, as are Alibaba and Baidu.

Read the scoop at Reuters.

OpenAI commits to power buildouts for Stargate centers

OpenAI is now addressing local concerns about its massive data center buildouts.
The company shares how it plans to deal with «Stargate Communities» across the USA, especially on water and power consumption — and commits to being «good neighbors.»

This means for instance that they will cooperate with power utility companies at each site to ensure they are «paying our own way on energy.»

That could entail building out the infrastructure for each site where needed, or simply strengthening the grid.

They also address water usage, and note that Stargate Abilene will use as much water annually as the community uses in a day, thanks to «innovations in the cooling water systems design.»

OpenAI is even committing to slowing down workloads on days with adverse conditions, to lighten the load on the grid.

There have been at least 25 data center cancellations in the USA due to opposition from local communities, Gizmodo reports.

Residents are typically worried about rising electricity and water costs, concerns OpenAI is now directly addressing.

Read more: OpenAI’s blog, Reuters, Bloomberg.

OpenAI’s latest numbers; 3x yearly financial/compute growth since 2023

Impressive tally; OpenAI shows unprecedented growth in their new numbers. (Picture: OpenAI)
OpenAI is scaling like never before, according to CFO Sarah Friar, who is out with some hard numbers.

Friar says revenue and compute grow in tandem with the advent of more powerful models — that they «scale with intelligence,» so to speak.

Looking at the numbers, compute has gone from 0.2 gigawatts in 2023 to 1.9 GW in 2025, growing about 3x every year since ChatGPT’s debut.

Financially, OpenAI now has $20 billion in revenue, following the same curve as compute and growing about 3x per year from $2 billion in 2023.
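A quick sanity check on the implied compound rate, treating both figures as two-year spans starting in 2023 (an assumption on the revenue side), bears out the roughly-3x claim:

```python
# Implied annual growth factor from the reported endpoints
compute_growth = (1.9 / 0.2) ** (1 / 2)  # GW, 2023 -> 2025
revenue_growth = (20 / 2) ** (1 / 2)     # $B, 2023 -> 2025
print(round(compute_growth, 2), round(revenue_growth, 2))  # 3.08 3.16
```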

All this is of course before ads arrive on the free tier, and before OpenAI’s global rollout of the Go subscription shows up in its revenue.

On the compute side, OpenAI is chasing infrastructure like there is no tomorrow, closing in on more than 30 GW before 2030 in what would be truly explosive growth — and we can only wonder how advanced frontier AI models will be by then.

Read more: OpenAI’s announcement, writeup on Reuters.

Musk runs into a snag on data center gas turbines — they are now illegal

xAI fits gas turbines on flatbed trucks and calls them «temporary» to avoid regulation. No more, says the EPA. (Picture: generated)
Apparently, a new EPA rule on stationary and «temporary» gas turbines for energy generation has made them illegal, according to CNBC.

They emit excessive levels of nitrogen oxides and must be regulated as combustion engines, the EPA says.

That means Musk’s xAI will have to rethink energy use at its Colossus facilities, which make widespread use of natural gas turbines to generate electricity.

The company will basically have to get Clean Air Act permits and prove the turbines aren’t harmful.

Locals have long complained of a rotten-egg-like stench in the air, CNBC writes, and smog is reportedly prevalent.

xAI uses 15 turbines for Colossus 1 and at one point had 59 turbines for Colossus 2, The Guardian writes.

Read more: The EPA Rule. Writeups on CNBC, Gizmodo and The Guardian.

OpenAI to spend $10 billion on compute from AI chip startup Cerebras

Cerebras makes powerful inference chips, for when an AI needs to think a little deeper. (Picture: generated)
The deal lands OpenAI an added 750 megawatts of capacity — but it’s not just any kind of compute.

Cerebras makes wafer-scale inference chips (used to generate the response an AI gives to a query), integrating networking and high-bandwidth memory on the same wafer.

This makes for much faster inference on complex tasks and should enable real-time reasoning; OpenAI could conceivably route such queries to this kind of compute.

The deal is worth over $10 billion, Reuters writes, and the capacity should come online in multiple tranches to be fully delivered in 2028, OpenAI says.

Sam Altman is an early investor in the company, and OpenAI once considered buying it, TechCrunch reports.

Read more: OpenAI’s announcement, writeups on Reuters, CNBC and TechCrunch.

Zuckerberg unveils «Meta Compute» initiative to secure infrastructure

Meta is serious about investing in data centers in the coming years. (Picture: generated)
The Meta CEO announced his new «top level» compute team on Threads, saying «Meta is planning to build tens of gigawatts this decade.»

He also said they will expand to «hundreds of gigawatts,» «over time.»

— How we engineer, invest, and partner to build this infrastructure will become a strategic advantage, Zuckerberg continues.

There are lots of obstacles to building global infrastructure like this, and challenges in «operating our global datacenter fleet and network,» Zuckerberg says.

The team will be headed up by Meta’s head of global infrastructure Santosh Janardhan and Daniel Gross, in close collaboration with newly hired Dina Powell McCormick.

Meta hasn’t released a frontier model since the Llama 4 series in April 2025, which drew plenty of controversy. They have since formed a Superintelligence Labs team with hires from across the industry to develop the next generation of models. There is no timeline for when they’ll be ready, but the ambition is real, Zuckerberg says:

— [we want to] deliver personal superintelligence to billions of people around the world, he closes his message.

Read more: Zuck’s threads message, Reuters, TechCrunch, Business Insider.

Meta buys into 6 GW of nuclear to power its Prometheus data center

Meta is betting on nuclear for its AI power needs in Ohio, US. (Picture: Adobe)
Three nuclear power companies have been selected by Meta to supply its «supercluster» being built in New Albany, Ohio.

Together, they are offering up to six gigawatts of power, ranging from established nuclear reactors to more experimental small modular reactors.

The deals are a result of a request for proposals from Meta in December 2024, TechCrunch writes, and they have now selected Vistra (2.1 GW) for their existing nuclear plants, and Oklo (1.2 GW) and TerraPower (up to 2.1 GW) for their modular reactors.

TerraPower was co-founded by Bill Gates, while Sam Altman is the biggest investor in Oklo.

TerraPower and Oklo are still in the startup phase, with capacity for TerraPower expected to come online in 2032, while Oklo’s «nuclear technology campus» won’t be coming online until 2030, CNBC writes.

No financial details of the deals have been made public, but Meta has committed to spending $600 billion on infrastructure over the next three years.

Read more: Writeups on TechCrunch, CNBC.

Nvidia responds to report that Meta might use Google’s TPU chips

A deal between Meta and Google might see custom TPU chips installed in Meta's data centers, as the industry looks for Nvidia alternatives.
Talks between Meta and Google on custom chips has Nvidia slightly worried, but not much. (Picture: generated)
News of the talks was posted by The Information on Tuesday, causing Nvidia to drop 3% and pushing Google toward a $4 trillion valuation in the markets.

Under the deal being discussed, Meta would start renting compute on Google’s Tensor Processing Units as early as next year, Reuters reports.

Continue reading “Nvidia responds to report that Meta might use Google’s TPU chips”

Cloud honcho at Google says they need to double capacity every six months

Google needs to reach 1000x capacity in the next 4-5 years, executives said at an all-hands meeting.
Google is not immune to the ever increasing demand on AI infrastructure. (Picture: generated)
The demand for AI infrastructure is «the most critical and also the most expensive part of the AI race,» Google Cloud VP Amin Vahdat said at a recent all-hands meeting, CNBC reports.

This comes as almost every other Big Tech company is increasing data center spending and Google has set aside $93 billion in «capital expenditures» this year to do the same.

This will be followed by a «significant increase» in 2026, but likely not matching OpenAI’s enormous $1.4 trillion data center spending.

They are aiming to «spend a lot,» and hit «the next 1000x in 4-5 years,» Vahdat is reported to have said.

That’s one thousand times more «capability, compute and storage networking,» which he aims to deliver for «essentially the same cost, power and energy spend.»
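The two claims in the headline and the quote line up: doubling capacity every six months compounds to roughly 1000x over five years:

```python
# Capacity doubling every six months, compounded over five years
doublings = 2 * 5        # two doublings per year, for five years
factor = 2 ** doublings
print(factor)  # 1024, i.e. roughly the "next 1000x"
```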

Read more: The scoop at CNBC, writeups at Ars Technica and Gizmodo.

Anthropic partners with Nvidia, Microsoft in $30 billion Azure deal

Anthropic nets Microsoft and Nvidia investments, and commits to spending $30 billion on Microsoft's Azure.
Nvidia and Anthropic will optimize for each other, and use Microsoft’s cloud. (Picture: generated)
The companies will invest a combined $15 billion in Anthropic, while the AI lab commits to $30 billion in spending on Azure, Microsoft’s cloud offering.

— We’re increasingly going to be customers of each other. We will use Anthropic models, they will use our infrastructure and we’ll go to market together, Microsoft CEO Satya Nadella said, according to Reuters.

The agreement means Anthropic will optimize their software stack to better run on Nvidia’s hardware, while Nvidia will «optimize for Anthropic workloads.»

Continue reading “Anthropic partners with Nvidia, Microsoft in $30 billion Azure deal”

Google invests $40 billion in Texas data centers

Google will invest $40 billion on three data centers in Texas, along with an energy buildout of some 6 gigawatts and a new solar plant.
Texas Gov. Greg Abbott and Sundar Pichai announcing the news. (Picture: Google)
Google joins the big players in building out staggering AI capacity, with a new data center in Armstrong County and two in Haskell County, near Abilene, hooked up to a solar and battery plant.

— This investment will create thousands of jobs, provide skills training to college students and electrical apprentices, and accelerate energy affordability initiatives throughout Texas, Alphabet CEO Sundar Pichai said, according to Reuters.

The investments will be made through 2027, but Google says nothing of when the centers will come online.

Google will also bring new funding for the power grid to support 6 gigawatts of «new energy generation and capacity,» and will back some 1,700 new electrical apprenticeships.

This makes Texas the second-largest data center state in the USA, after Virginia, notes the Texas Tribune.

— They say that everything is bigger in Texas — and that certainly applies to the golden opportunity with AI, Pichai said.

Read more: Google’s blog, writeups on Reuters, Texas Tribune.