Demis Hassabis: China is just «months» behind US in AI capabilities

China is great at playing catch-up, but can they innovate? That’s the next challenge, Hassabis says. (Picture: Wikipedia, CC BY-SA 4.0)
The CEO of Google’s DeepMind has some choice words for China in a recent podcast.

They might be «just a matter of months» behind Western capabilities, he tells CNBC.

— The question is, can they innovate something new beyond the frontier? So I think they’ve shown they can catch up … and be very close to the frontier … But can they actually innovate something new, like a new transformer … that gets beyond the frontier? I don’t think that’s been shown yet, he tells the new podcast The Tech Download.

The key to unlocking Chinese AI is access to chips, he says, an area where the US is far ahead. The US recently okayed exports of Nvidia’s powerful H200 chip, but the reception from Chinese authorities has been lukewarm.

— To invent something is about 100 times harder than it is to copy it, says Hassabis on the podcast. — That’s the next frontier really, and I haven’t seen evidence of that yet, but it’s very difficult.

Read the full scoop on CNBC.

Gemini Pro and Ultra subscribers get upgraded, decoupled usage limits

Pro and Thinking modes of Gemini 3 no longer draw from the same usage allocation. (Picture: Generated)
Prompts to the Thinking and Pro Gemini models used to draw from a shared pool of a hundred per day — that is no longer the case.

Now you get separate usage limits for the models, so you no longer drain your Pro pool by using the Thinking model or vice versa.

At the same time, Google is upgrading the total limits for the Thinking model to up to 300 prompts per day for the Pro plan, while leaving the Pro model allocation at 100 per day.

Ultra subscribers get 500 daily prompts on the Pro model and 1,500 prompts on the Thinking model.

The upgrade is due to user feedback, writes 9to5Google, quoting Google:

— Many of you want more precision and transparency when deciding which model to use for your daily tasks.

Read more: Google: Rate limits on Gemini, writeups on 9to5Google, Mashable.

Google launches Personal Intelligence, based on trove of data held on people

Google wants to get to know you better by attaching more data to your account — to provide «personal» answers. (Picture: Google)
In what was probably just a question of time, Google has found a way to tie all its personal data on people to its AI products.

Personal Intelligence, launched today, lets Gemini and, later, AI Mode, draw information from Gmail, Google Photos, YouTube and Search history to give you a more «personalized» experience.

Continue reading “Google launches Personal Intelligence, based on trove of data held on people”

Apple’s Google partnership will allow them to tinker with the AI

Expect more advanced AI features in June, not spring, a new report says. (Picture: generated)
More detail is coming out on the Apple-Google AI deal, and it is clear it will come with no Google or Gemini branding or references whatsoever on the Apple side.

The LLM will supposedly function like any other, and be better at things such as scanning your Calendar app for upcoming events and your Contacts for sending messages. It should also be better at providing «emotional support» through «conversational responses» according to a report from paywalled The Information, seen by 9to5Mac.

The Google model will not be in a glass house over at Googleplex, but rather sit on Apple’s Private Cloud Compute servers, and, according to The Information, Apple can choose to ask Google to implement improvements or tinker with the model itself — evolving it to Apple’s particular needs.

Timewise, «some features» will launch as soon as this spring, but others — like remembering past conversations, or warnings about the weather ahead of an Apple Calendar event — won’t launch until WWDC in June, apparently.

Read more: The Information (paywalled), 9to5Mac, AppleInsider — and some choice speculation from The Verge.

It’s official: Apple chooses Google as AI provider

You’ll find Google’s AI under the hood of your Apple devices in the near future. (Picture: generated)
In a joint statement, Apple says that «Google’s AI technology provides the most capable foundation for Apple Foundation Models,» and that it will now be driving AI for Apple.

All of Google/Apple’s models will continue to run on Apple’s Private Cloud Compute, ensuring industry leading safety and privacy for queries and prompts.

Unpacking the statement reveals a lot. Firstly, Apple is not limiting Google’s role to a long-awaited smarter Siri assistant, but will use the technology as a foundation for «innovative new experiences» for all Apple users.

The deal was tentatively agreed in November, and the reason for such a late statement could be that Apple has decided to expand the scope.

This means Google will be delivering AI services to Android and iOS, cornering the smartphone market, and to Chrome, which basically has a browser monopoly — prompting antitrust concerns from many, including Elon Musk.

Apple has been rumored to announce a revamped Siri assistant in March or April this year, writes MacRumors.

Read more: statement from Apple and Google, CNBC, MacRumors, Engadget.

Google announces Universal Commerce protocol, set for AI Mode, Gemini

Soon you will not only get ads in AI Mode, but discounts and checkouts, too. (Picture: generated)
The Universal Commerce protocol will simplify the shopping experience for a broad range of retailers and will soon offer «native checkout» so you «can buy directly on AI Mode» and Gemini, beginning with Google Pay.

The system was co-developed with the likes of Walmart, Target, Shopify, Etsy and Wayfair — many of whom also worked on similar efforts with OpenAI’s checkout and apps earlier.

It’s a «new open standard for agents and systems to talk to each other across every step of the shopping journey,» Sundar Pichai says on x.com.

One feature is the Business Agent, which will answer product questions in the «brand’s voice,» Engadget writes. The agent will also let advertisers «present exclusive offers» to shoppers who are ready to buy.

Tobi Lutke, CEO of Shopify, says the platform can transact with any merchant, and that agents can now handle anything from discovery to fulfillment, dealing with everything from discounts and subscriptions to loyalty programs.
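The writeups don’t publish the Universal Commerce schema itself, but the described flow — agents talking to merchant systems across discovery, offers, checkout and fulfillment — could look something like the sketch below. Every field name here is a hypothetical illustration, not the actual protocol:

```python
# Purely illustrative: the actual Universal Commerce schema isn't public in
# these writeups, so every field name below is a hypothetical guess at what
# an agent-to-merchant checkout message might carry.
import json

checkout_request = {
    "protocol": "universal-commerce",  # hypothetical identifier
    "step": "checkout",                # discovery -> offer -> checkout -> fulfillment
    "merchant": "example-store",
    "items": [{"sku": "ABC-123", "qty": 1, "unit_price": 19.99}],
    "discounts": ["WELCOME10"],        # the standard reportedly covers discounts,
    "loyalty_account": "user-42",      # subscriptions and loyalty programs
    "payment_method": "google_pay",    # native checkout starts with Google Pay
}

print(json.dumps(checkout_request, indent=2))
```

The point of an open standard like this is that any agent (Gemini, AI Mode, or a third party) could emit the same message shape to any participating merchant.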

Read more: Engadget, Axios and TechCrunch.

Gemini is coming to Gmail: Google is rolling out new AI features

Gmail needs evolving, Google says, and does it with AI from Gemini. (Picture: Google)
3 billion people use Google’s email tool, and, for now at least, American customers are getting an AI revamp.

Email volume is at an all-time high, Google says, and «managing your inbox and the flow of information has become as important as the emails themselves.»

Continue reading “Gemini is coming to Gmail: Google is rolling out new AI features”

Character.ai and Google settle harmful behavior court cases

With other AI companies watching closely, Google and Character.ai quietly bow out of contentious cases. (Picture: Adobe)
The companies have moved in four states to settle cases where the chatbot was accused of encouraging harmful behavior — sometimes resulting in death.

The court documents contain no actual monetary payouts to the victims’ families, as these still appear to be under negotiation.

These are not the only cases of this kind, and may signal a shift in strategy for the entire AI business, as Meta and OpenAI are also facing similar lawsuits.

Character.ai was accused by parents of encouraging their children to cut their arms, suggesting murdering their parents, writing sexually explicit messages and of not discouraging suicide, Axios writes.

They have since banned under-18s from using their service.

Read more: Writeups on Axios, TechCrunch.

Boston Dynamics officially launches Atlas humanoid, DeepMind partnership

The new Atlas robot is ready to work, and will soon have brains from Google. (Picture: Boston Dynamics)
The company has great plans for its new robot, including placing it in majority owner Hyundai’s manufacturing plants — and is announcing a partnership with Google’s DeepMind at the same time.

Google already has a «Gemini Robotics» arm, and has been looking for partners, TechCrunch writes — and Boston Dynamics has been looking for an advanced AI model.

The new Atlas humanoid made its first official debut yesterday at CES, and will be put to work in Hyundai’s U.S. factories by 2028, handling tasks that are either too dangerous or too strenuous for humans.

The trick is to build a proper world model for robots to train on, and Google is pretty advanced in this space.

— We are excited to begin working with the Boston Dynamics team to explore what’s possible with their new Atlas robot as we develop new models, says Carolina Parada, Senior Director of Robotics at Google DeepMind in a release.

Read more: Boston Dynamics’ press release, more on Atlas. Writeups, detail from TechCrunch, Mashable.

Google launches Gemini 3 Flash: faster, cheaper and the new default

Gemini 3 Flash is a whole lot faster than Pro, while retaining much of its prowess. (Picture: Google)
Offering performance just below Gemini 3 Pro and outperforming 2.5 Pro by wide margins, the new Flash model’s major selling point is price and speed.

The model is especially good at agentic workflows, and in fast reasoning, Google says.

On the benchmarks it performs stunningly well for the price, even beating Gemini 3 Pro on MMMU-Pro, which tests multimodal understanding. It also comes in at 33.7% on Humanity’s Last Exam, just a little below GPT-5.2.

Google has priced the model for efficiency, too, with a cost of $0.50 for 1 million tokens of input, and $3.00 for the same output. This is substantially lower than 3 Pro and is only beaten by Grok 4.1 Fast and Gemini 2.5 Flash.
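At those per-million-token rates, estimating a bill is simple arithmetic. A quick sketch — the prices are from the announcement, while the token counts are made-up example values:

```python
# Gemini 3 Flash list prices, USD per 1 million tokens (from the announcement)
INPUT_PRICE = 0.50
OUTPUT_PRICE = 3.00

def flash_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at Gemini 3 Flash rates."""
    return (input_tokens / 1_000_000) * INPUT_PRICE + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE

# Example: a 200,000-token prompt that produces a 50,000-token response
print(round(flash_cost(200_000, 50_000), 2))  # → 0.25
```

Output tokens dominate the bill at six times the input rate, which is why the model’s speed on short, agentic responses matters as much as the headline price.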

The model is rolling out in all channels as of today, and will become the default in the Gemini app and on the web, as well as in AI Mode, where it should markedly improve reasoning.

Read more: Google: launch page, for developers, and in AI Mode. Writeups at The Verge, 9to5Google and Engadget.

Google rolling out live translations on Android, as Apple launches in Europe

Google live translate works with over 20 languages, but is only available in the USA, India and Mexico for now.
Google says they’ll expand availability and launch on iOS in 2026. (Picture: Screenshot)
Google has just announced a beta translation service that will translate from 20 languages directly in any headphones.

Apple has had this function in the US since iOS 26 in June 2025, and just rolled it out in Europe with iOS 26.2.

The idea is to use Gemini 2.5 Flash Native Audio as the base model, to «preserve the tone, emphasis and cadence of each speaker,» according to 9to5Google.

The feature is only available in the USA, Mexico and India so far, and only on Android, but Google says «we’ll be bringing it to iOS and more countries in 2026.»

All you need to do is tap the «Live translate» button in the Google Translate app, point your phone in the direction of the speaker, and it will start translating, with an understanding of slang, idioms and euphemisms.

Apple is a little more particular than Google, requiring AirPods Pro 2 or 3, or AirPods 4 with ANC, for the feature to work properly.

Read more: Google’s announcement, writeups on 9to5Google, Ars Technica. MacRumors on Apple’s capabilities.

Google demos smart glasses with Android XR, set to debut in 2026

The glasses without a screen will arrive first, as of «next year».
Google’s in-specs screen is impressing reviewers, but it’s the screenless glasses getting released first. (Picture: Google)
Google has officially taken the lid off «Project Aura,» inviting a whole host of websites to demo it — and doing their own bit in The Android Show, XR Edition on YouTube.

They are mostly concerned with the glasses with internal screens, which can run bog-standard Android apps as well as Android XR apps — and provide you with information right inside the glasses.

These spectacles, while impressive, connect via a wire to a puck in your pocket that serves as battery and trackpad in one, and use a phone or laptop for computing power — but they don’t have a release date yet.

Continue reading “Google demos smart glasses with Android XR, set to debut in 2026”

Google partners with Replit to bring «vibe coding» to the enterprise

Replit tightens its integration with Google models and Cloud.
Vibe coding is coming for the enterprise. (Picture: Google, modified)
Decade-old Replit has a valuation of $3 billion and is a «leader» in the AI vibe-coding space, writes CNBC, and the company is now tightening its integration with Google Cloud and the Gemini models.

— The goal for us, and Google, is to make enterprise vibe coding a thing, Replit founder and CEO Amjad Masad said. — We want to show the world that these tools are actually going to transform businesses and how people work.

Under the new agreement, Replit will expand its Google Cloud use and «further integrate Google’s models into its platform,» Google writes on the deal.

Replit will gain access to all of the Gemini models, and the deal will «help enterprise customers embrace vibe coding.»

— Our mission is to enable the next billion software creators — from hobbyists to entrepreneurs to enterprises, Masad said.

Read more: Google’s announcement, writeup on CNBC.

Google’s Gemini 3 Deep Think debuts for Ultra users

Deep Think incorporates the code that won the math olympiad, and is reserved for heavy lifting.
Gemini 3 Deep Think brings some serious AI muscles to tackle the toughest problems. (Picture: Google)
Google just launched its most capable model for its highest paid tier, ready to take on the most confounding problems.

It incorporates the solutions that won gold at the International Mathematical Olympiad and excelled at the ICPC coding contest, and carries on its duties with parallel reasoning, letting it try several approaches to a problem at once.

Continue reading “Google’s Gemini 3 Deep Think debuts for Ultra users”

Google linking AI Overviews with AI Mode, further worrying web publishers

Transitioning seamlessly from Overviews to AI Mode instead of websites, Google is keeping more users on its own pages.
Google will send you to AI Mode instead of to websites. (Picture: Screenshot)
Rolling out globally on mobile as of Monday, Google is giving its AI Overviews a little extra depth.

The idea is that sometimes the user is satisfied with a quick overview as an answer to a query, but sometimes it brings up more questions and requires a little more digging.

Therefore, the AI Overview now sometimes comes with an input field for AI Mode at the bottom of the screen, which will be able to give more comprehensive answers – rather than sending you to a website.

— It’s one seamless experience: a quick snapshot when you need it, and deeper conversation when you want it, says Vice President of Product for Google Search, Robby Stein, on x.com:

Continue reading “Google linking AI Overviews with AI Mode, further worrying web publishers”