
«Like the generations before it, Gemini 3 is once again advancing the state of the art,» says CEO Sundar Pichai on Google’s launch page, adding: «In this new chapter, we’ll continue to push the frontiers of intelligence, agents, and personalization to make AI truly helpful for everyone.»
Debuting in preview across all of Google’s services, including AI Mode on the Google front page, the new model is «another big step on the path toward AGI,» according to Google.
It’s a true multimodal model, Google says, processing text, images and audio natively; for example, you can take pictures of old recipes and have it turn them into a cookbook.
It currently tops the LMArena leaderboard by a clear margin, scores 37.5% on Humanity’s Last Exam without tool use versus GPT-5.1’s 26.5%, and reaches 31.1% on ARC-AGI-2, almost double GPT-5.1’s score.
Equally impressive is the new Gemini 3 Deep Think, which scores 41% on Humanity’s Last Exam and 45.1% on ARC-AGI-2. That variant is going to safety testers first, before rolling out to Google AI Ultra subscribers.
Gemini 3 gives you «new ways to understand information and express yourself,» Google says, generating rich visuals, layouts, interactive tools and simulations on the fly based on your query.
Google also says it’s the best agentic coding model they’ve ever built, scoring 1487 points in WebDev Arena, almost 100 points higher than GPT-5.
There is also an agent interface that can «take action on your behalf in complex, multi-step workflows», such as booking services or organizing your inbox, though this appears to be limited to Google AI Ultra subscribers.
Gemini 3 is being released «for everyone» in the Gemini app, and to Pro and Ultra subscribers in AI Mode. It is also rolling out to the Gemini API in AI Studio and to Antigravity, Google’s new agentic development platform.
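For developers, calling the new model through the Gemini API should look much like it does for earlier Gemini versions. Here is a minimal sketch using the google-genai Python SDK, assuming an AI Studio API key; the model identifier "gemini-3-pro-preview" is an assumption and the exact name may differ:

```python
# Minimal sketch: calling Gemini 3 via the Gemini API (AI Studio).
# Requires the google-genai SDK: pip install google-genai
# The model name below is an assumption; check AI Studio for the actual identifier.
from google import genai

client = genai.Client(api_key="YOUR_AI_STUDIO_API_KEY")

response = client.models.generate_content(
    model="gemini-3-pro-preview",
    contents="Summarize what's new in Gemini 3 in three bullet points.",
)
print(response.text)
```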
Read more: Google’s launch page, The Verge and Ars Technica