Quick Friday news roundup: Opus 4.1, Grok undresses Taylor Swift, and more

Opus 4.1 is said to be a big jump in performance, but doesn't quite reach the top of the pack.
Anthropic’s Opus 4.1 is very close to the state of the art, and many users are claiming it’s way better than 4.0. (Picture: Anthropic)
Anthropic announces Claude Opus 4.1
In an incremental update that got lost in this week's headlines, Opus has been "improved across most capabilities" relative to version 4.0. It now scores 74.5% on SWE-bench Verified, almost as good as GPT-5. Windsurf says the performance gains are similar to going from Sonnet 3.7 to 4. It's available now and costs the same as Opus 4.0, and users are also noting a significant improvement.

Google says people are still clicking
After a Pew Research report said users are less likely to click through from AI Overviews in Google, the entire publisher scene erupted and saw doom and gloom on the horizon; publishers were already seeing fewer clicks from Google in their logs. Now Google is trying to counter with a happy blog post claiming average click quality has actually increased, and that it is in fact sending more "quality clicks" to publishers than before. No stats, studies, or other underpinning for that claim, though.

Grok launches Imagine, with «Spicy» mode
Imagine lets you generate or upload innocent little pictures and run them through a "spicy" mode that will partially undress the subjects or put them in suggestive lingerie. This is precisely what The Verge managed to do with Taylor Swift, resulting in a generated, semi-nude video. There are serious ethical questions around the new feature, as it could likely be used on anyone, producing intimate deepfakes. Grok itself says it was a slight bug in the system that has since been fixed.

Google counters OpenAI’s Study Mode with Guided Learning
Informed by "years of research and partnerships with educators, pedagogical experts and students," Guided Learning lets students or the curious learn subjects through an iterative process, piece by piece, through interaction and discussion with the LLM. It's supposed to take you from "quick answers to deep understanding" and let students "develop their own thinking by guiding them with questions." It provides multimodal responses and quizzes to determine your level, and tailors the training based on that. Here's teknotum on OpenAI's Study Mode.

Reddit AMA with OpenAI's GPT-5 team this morning (PST)
Even Barack Obama did a Reddit Ask Me Anything post back in its heyday, and it's a useful way to reach out to the public, so long as you are prepared for the usual off-kilter and slightly weird questions. Now OpenAI is doing one at 11:00 PST / 20:00 CET on Friday, with Sam Altman leading a pack of no more than eight researchers from the GPT-5 team. Be prepared for the odd really good question in the mix, a bit of weirdness, and for the OpenAI team to carefully choose the questions they want to answer. You can already see the questions piling up for the main event here.