(Picture: Google)
Apart from blowing up the benchmarks, Gemini 3 takes pride in telling you «what you need to hear, not just what you want to hear.»
— Like the generations before it, Gemini 3 is once again advancing the state of the art, says CEO Sundar Pichai on their launch page, and adds: — In this new chapter, we’ll continue to push the frontiers of intelligence, agents, and personalization to make AI truly helpful for everyone.
Debuting in preview across all of Google’s services, including AI Mode on their front page, the new model is «another big step on the path toward AGI,» Google says.
Texas Gov. Greg Abbott and Sundar Pichai announcing the news. (Picture: Google)
Google joins the big players in building out staggering AI capacity, with a new data center in Armstrong County and two in Haskell County, near Abilene, hooked up to a solar and battery plant.
— This investment will create thousands of jobs, provide skills training to college students and electrical apprentices, and accelerate energy affordability initiatives throughout Texas, Alphabet CEO Sundar Pichai said, according to Reuters.
The investments will run through 2027, but Google says nothing about when the data centers will come online.
The deal also brings new funding for the power grid to support 6 gigawatts of «new energy generation and capacity,» and Google will back some 1,700 new electrical apprenticeships.
This makes Texas the second largest data center state in the USA, after Virginia, notes the Texas Tribune.
— They say that everything is bigger in Texas — and that certainly applies to the golden opportunity with AI, Pichai said.
You can now find just the right product and comparison shop in AI Mode. (Picture: Google)
Google already has a massive database of some 50 billion product listings, which you may already have tapped into when shopping with Google Search.
Now that data is being combined with AI Mode, letting you drill into granular detail in natural-language conversations to find just the right product.
The industry has long wondered how to crack monetizable shopping features, and Google has been teasing ads in AI products for a while. This might be a first step.
A better way to shop?
The new feature will give you «rich visuals and the details you need,» and you can dig into the results or bring up products side by side for comparison.
The original SIMA was good at following instructions, but the second version now has access to Gemini models and can explore 3D worlds on its own, without any advance training.
That’s great for video games, where it can reason its way through complex goals.
Learning from concepts
It can also learn across games, taking cues from «mining» in one game and transferring the concept to «harvesting» in another, meaning it can iterate and improve over time.
You can now edit your Google Photos with natural language and get instant results, thanks to Nano Banana. (Picture: Google)
Google has announced a whole slew of AI features for the Photos app, bringing it up to date with their latest «conversational» image generator.
You can now ask the app to remove sunglasses in photos or fix a smile, and it can also respond to names you have tagged in your pictures, such as «make Engel smile.»
«Help me edit»
You can use the «Help me edit» button in the editor and simply describe the style you want for your pictures, from a Renaissance portrait to an illustration from a children’s storybook.
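The Photos feature itself is app-only, but the underlying «Nano Banana» model is also exposed through the Gemini developer API, so the same kind of natural-language edit can be sketched in a few lines of Python. The snippet below is illustrative only: the model id and file names are assumptions, and it goes through the public google-genai SDK rather than Google Photos.

```python
# A minimal sketch of a natural-language photo edit via the public Gemini API.
# Assumptions: the model id for the «Nano Banana» image model and the file names.
from io import BytesIO

from google import genai
from PIL import Image

client = genai.Client()  # reads GEMINI_API_KEY from the environment
source = Image.open("portrait.jpg")

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # assumed id for the image-editing model
    contents=["Remove the sunglasses and give the person a gentle smile", source],
)

# The edited image comes back as inline bytes among the response parts.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        Image.open(BytesIO(part.inline_data.data)).save("portrait_edited.png")
```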
Expanding from on-device AI, Google’s new tech will provide the same level of privacy. (Picture: Google)
Similar to Apple’s Private Cloud Compute, Google’s solution is to enhance apps on your phone, Chromebook or whatever else you are using, with extra power from Gemini in the cloud.
Most of Google’s AI features are handled on-device, but they are seeing the need for more computing power to move «from completing simple requests to AI that can anticipate your needs with tailored suggestions or handle tasks for you at just the right moment.»
The connections between the device and servers are encrypted, and the data transmitted is not available to anyone’s prying eyes — not even Google’s.
The new tech is not getting a wide rollout yet, and most AI queries will still run on-device. For now it only powers Magic Cue on Android and the Recorder app, which will be able to summarize transcripts in «a wide range of languages.»
This is foundational technology for Google, and they will be rolling out features across their services in short order, saying «This is just the beginning.»
Robby Stein, VP of Product for Google Search, comments on advertising in Google’s AI. (Picture: screenshot)
In a wide-ranging interview on the podcast Silicon Valley Girl, Robby Stein, VP of Product for Google Search, is positive about how advertising could get even more granular with all the extra information people provide in AI products, saying:
Ads not going away
— I don’t see them [ads] going away. The way people are using Google Search isn’t really changing; what is happening is that it’s expanding [with AI services], he opines.
Data center deals are flourishing, and none of the big tech spenders feel they can afford not to be in the race. (Picture: Adobe)
Quarterly results are in for Microsoft, Alphabet and Meta, and while the numbers are mixed, they all agree on the big capital expenditures needed to build data centers in fiscal year 2025.
The coming AI wars will be fought with data centers and gigawatts, and nobody wants to lose out.
Google has made «extraordinary progress» and is getting ready to debut Gemini 3.0 soon. (Picture: screenshot)
After months of rumor and speculation about when Google would reply to OpenAI’s GPT-5, we now have word straight from the head of Google himself.
In a sit-down interview with Salesforce CEO Marc Benioff at the Dreamforce 2025 conference, which goes long on the future of AI, cloud and innovation, he let this quote rip:
— We kickstarted Gemini, we brought Google Brain and Google DeepMind together, and we’ve been rapidly iterating since then, he tells Benioff, before adding some forward-looking comments.
One of the questions on the exams is calculating the distance to quasars. (Picture: screenshot)
Scientists and judges from the International Olympiad on Astronomy and Astrophysics (IOAA) have given five top AI models a run through the exams from 2022 to 2025, and the top scores went to the models from OpenAI and Google.
The IOAA is a top-rated competition for high school students worldwide, held annually with some 300 participants from 64 countries, and consists of questions designed to demonstrate deep conceptual understanding, multimodal analysis and multi-step derivations.
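To give a sense of what a «distance to quasars» question involves, here is a deliberately simplified sketch: for low redshifts you can get a first-order distance from Hubble’s law, d ≈ cz/H0. The values below (the Hubble constant and the example redshift) are assumptions for illustration, and real exam problems require the full relativistic treatment.

```python
# First-order distance estimate from redshift via Hubble's law: d ≈ c*z / H0.
# This deliberately ignores relativistic and cosmological corrections.
C_KM_S = 299_792.458  # speed of light in km/s
H0 = 70.0             # assumed Hubble constant in km/s per Mpc

def hubble_distance_mpc(z: float) -> float:
    """Approximate distance in megaparsecs for a low-redshift source."""
    return C_KM_S * z / H0

# Example: a quasar at redshift z = 0.158 (roughly that of 3C 273).
print(f"{hubble_distance_mpc(0.158):.0f} Mpc")  # ~677 Mpc
```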
Google’s CodeMender won’t be released just yet, but is being offered to «critical open source software.» (Picture: Google)
It’s been a busy 24-hour stretch for Google, launching two new models and an expansion of AI Mode into Europe.
The CodeMender model is based on Google’s own research into finding zero-day vulnerabilities in computer programs, which turned up plenty of exploits and concluded that human reviewers would struggle to keep up once AI scanners are widely deployed.
While others are still struggling with the AI-based browser, Google is going all-in with Chrome. (Picture: Google)
Google goes nuclear; brings Gemini to Chrome
While OpenAI is still working on a browser and other efforts are cautious or have failed to take off, Google is done waiting. They are now building the Gemini assistant directly into the world’s most popular browser. «Gemini in Chrome» will navigate and summarize your tabs for you, offer helpful suggestions in the URL bar, and should soon help you order stuff online. It can even find your closed tabs and search for references inside YouTube videos. It’s rolling out to Mac and Windows users with language set to English as of this writing. They call it «a new era of browsing.» More at Google’s launch page, Google’s overview and launch thread.
Hands-on with Meta’s new Ray-Bans
Has Meta found the Goldilocks zone of smart glasses? Their recently launched Ray-Bans with an internal screen seem to have hit the sweet spot with reviewers. The Verge calls them the best smart glasses out there, Tom’s Hardware says it «feels like the future,» and Gizmodo writes that you’re going to want a pair. The consensus seems to be that the in-lens screen is quite useful, just about bright enough, and pairs well with the new wristband.
OpenAI says they will now focus on scientific discovery. (Picture: OpenAI)
ChatGPT solved all 12 problems in the 2025 International Collegiate Programming Contest (ICPC), an algorithmic programming contest for university students.
That result would have given it first place if it were human, as the best college teams only solved eleven.
Google also participated with a custom Gemini 2.5 Deep Think, which earned gold status by solving 10 of the problems, a result the company says would have placed it second.
Speaking for the entire media industry, Penske says AI Overviews are wreaking havoc on their business model. (Picture: screenshot)
Penske Media Corporation claims Google is siphoning traffic away from its websites and stealing its content with the AI Overviews feature.
Penske is the first major publisher to sue Google over the feature, and research from Pew shows that fewer than one percent of users click through on source links when an AI Overview appears in the results.
20% drop in traffic
PMC, which is the parent company of Variety and The Hollywood Reporter, has seen traffic drop by some 20% and affiliate revenue decline 30% since Google’s overviews started appearing on its stories.
Veo 3 is getting some massive API updates today. (Picture: screenshot)
In a big day for video generation at Google, the Veo 3 generator finally gets ready for TikTok and Reels, while also hitting «general availability» in the API, according to a new blog post.
«General availability» is the same wording Google uses for Gemini 2.5 Flash to signal that you can use as much as you are willing to pay for, so they may be hinting that there are no hard usage limits on API access.
Previously, Gemini Pro users would only get three video generations per day, and Ultra users five. But if you pay as you go through the API, you can generate as much as you are willing to pay for.
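For anyone wanting to try the pay-as-you-go route, the sketch below shows the general shape of a Veo call through the google-genai Python SDK. It is based on Google’s earlier Veo documentation, so treat the model id, prompt and file name as assumptions; the exact fields may differ for Veo 3.

```python
# A rough sketch of pay-as-you-go video generation through the google-genai SDK.
# Model id, prompt and output file name are assumptions for illustration.
import time

from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment

operation = client.models.generate_videos(
    model="veo-3.0-generate-001",  # assumed Veo 3 model id
    prompt="A slow drone shot along a rocky coastline at sunset, vertical 9:16",
)

# Video generation is a long-running operation, so poll until it finishes.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

# Download the first generated clip to a local file.
generated = operation.response.generated_videos[0]
client.files.download(file=generated.video)
generated.video.save("coastline.mp4")
```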