Anthropic introduces charts and diagrams in Claude, days after ChatGPT

Claude can now illustrate some concepts and processes within the main chat window, just days after ChatGPT added visuals for some math queries.

Previously, Claude drew illustrations in a sidebar window that you could copy or download; the new visuals can be interactive and appear directly in the main chat, The Verge writes.

Sometimes the chatbot will decide on its own that a concept needs illustrating, or you can simply ask it to make one; it then draws the chart using HTML and XML-based vector graphics.
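
To illustrate what such a chart could look like under the hood, here is a minimal sketch that emits a bar chart as SVG, the XML vector format used on the web. The function and its output are hypothetical examples for illustration, not Anthropic's actual rendering code.

```python
# Minimal sketch of a vector chart of the kind the article describes:
# a bar chart expressed as SVG (an XML vector format) embeddable in HTML.
# Purely illustrative; not Anthropic's rendering code.

def bar_chart_svg(values: list[int], bar_width: int = 40) -> str:
    """Emit a simple SVG bar chart as a string."""
    height = max(values)
    bars = "".join(
        f'<rect x="{i * bar_width}" y="{height - v}" '
        f'width="{bar_width - 4}" height="{v}" fill="steelblue"/>'
        for i, v in enumerate(values)
    )
    width = len(values) * bar_width
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">{bars}</svg>')

svg = bar_chart_svg([30, 80, 55])
```

Pasted into an HTML page, a string like this renders as a chart with no image file involved, which is what makes the output easy to generate, scale, and make interactive.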

The feature is available to all users, paid and free — but it’s officially in beta, so users can expect hiccups, and it’s not available on mobile, notes Engadget.

Read more: The Verge, Engadget.

Claude for Excel and PowerPoint now share info, and get skills

Excel and PowerPoint editions of Claude can now talk to each other. (Picture: Anthropic)
As of today, Claude shares your conversation «across all open files,» so actions in one file can be «informed» by what’s happening in the other.

That eases some hassle for those who do a lot of work in PowerPoint and Excel, removing the need to reintroduce the task or shuttle data between files in extra steps.

It means that users can pull financials into a workbook and drop the valuation summary into a PowerPoint slide without switching tabs or re-explaining at every step.

At the same time, Anthropic is launching skills for workflows, which can be shared and dropped into other apps across an organization, so everyone can use the same time-saving actions stored in them.

Skills are stored prompts for workflows that work like old-school templates: everything is set up in advance for repetitive tasks that are easy to automate.
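
The template idea can be sketched in a few lines. This is a toy model of a stored, reusable prompt; the file layout, field names, and example workflow are invented for illustration and are not Anthropic's actual skill format.

```python
# Toy model of a "skill": a stored, reusable prompt template that gets
# filled in with task-specific values. The format here is illustrative,
# not Anthropic's actual skill file format.

def load_skill(text: str) -> dict:
    """Parse a skill file: a title line, then the template body."""
    name, _, body = text.partition("\n")
    return {"name": name.strip("# ").strip(), "template": body.strip()}

def apply_skill(skill: dict, **params: str) -> str:
    """Fill the stored template with values for this run."""
    return skill["template"].format(**params)

quarterly = load_skill(
    "# Quarterly summary\n"
    "Summarize the {metric} figures in {workbook} and "
    "draft a one-slide overview for {audience}."
)

prompt = apply_skill(quarterly, metric="revenue",
                     workbook="Q3.xlsx", audience="the board")
```

Because the skill is just stored text, sharing it across an organization amounts to distributing a file, which is what makes the same time-saving action reusable by everyone.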

Read more: Anthropic’s presentation and launch tweet. Writeups on VentureBeat and The Decoder.

Anthropic launches Code Review for all those bothersome Claude pull requests

Drowning in pull requests from Claude Code? Anthropic has an answer. (Picture: Anthropic)
If you’ve ever used Claude Code, you’ve probably noticed the mountain of pull requests awaiting review on any decent-sized code base. Reviewing them takes time and effort for developers — but now Anthropic offers a solution.

— One of the questions that we keep getting from enterprise leaders is: Now that Claude Code is putting up a bunch of pull requests, how do I make sure that those get reviewed in an efficient manner? Cat Wu, Anthropic’s head of product, tells TechCrunch.

The answer is the newly launched Code Review tool that uses multiple agents to scan code changes, comment, and rate them for severity.

Using it internally, Anthropic says it found something of note in 84% of automated reviews of code changes longer than 1,000 lines.

The only problem is that running many agents over code changes takes quite a few tokens: the average cost is between $15 and $25 per pull request, depending on complexity, Anthropic writes.

The tool is available as a research preview for Team and Enterprise plans as of today.

Read more: Anthropic’s announcement, writeups on TechCrunch and The Register.

Claude finds 22 security vulnerabilities in the latest version of Firefox

Claude spent two weeks finding a fifth of all serious bugs in all of 2025. (Picture: Adobe)
Fourteen of the bugs Opus 4.6 discovered were classified as «high-severity vulnerabilities» and were fixed by Mozilla in the latest update, in late February.

The process took only two weeks to turn up about a fifth as many high-severity bugs as were found in all of 2025, offering a much faster way to scan for them.

— Opus 4.6 is currently far better at identifying and fixing vulnerabilities than at exploiting them. This gives defenders the advantage, Anthropic writes, but warns this might change.

Claude works across the full pipeline, from initial bug hunting through verification to suggesting patches, offering much-needed relief to overworked developers.

— We view this as clear evidence that large-scale, AI-assisted analysis is a powerful new addition in security engineers’ toolbox, Mozilla says in a blog post.

Read more: Anthropic’s walkthrough, Mozilla’s blog. Writeups on TechCrunch and Axios.

Jensen Huang says Nvidia’s investment opportunity in AI labs is closing

Huang figures the privately owned AI labs era might be finished. (Picture: Nvidia)
The Nvidia CEO says the opportunity to invest might soon end, Reuters reports.

The reason is straightforward: Huang suspects that Anthropic and OpenAI going public «later this year» will close the window for private equity deals.

The latest deal to fund OpenAI with $30 billion «might be the last time» to «invest in a consequential company like this,» Huang admits.

Nvidia has invested some $130 billion in OpenAI across two rounds: the recent $30 billion direct investment, and a circular deal in which it paid $100 billion in return for OpenAI buying $100 billion of chips from it.

Likewise, Nvidia invested in Anthropic’s November funding round, buying $15 billion of shares in the company.

Read more: Reuters, CNBC and TechCrunch.

Claude can now import memory and context, and debuts it for free plans

Hot on the heels of historic popularity, Anthropic is making it easier to switch chatbots. (Picture: Anthropic)
By copying a single, long, and complex prompt from Claude into any other chatbot, then pasting the reply back, you can have Claude remember that information about you.

This includes both stored memories and context «learned about me from previous conversations,» and personal details, like name, location, job, family — just about anything you’ve told the bot about you.

That would make it easier to pick up with Claude where you left off, and clears one of the highest hurdles in the competition between chatbots: when you spend years training an AI about you and your preferences, the barrier to switching becomes exorbitant.

Claude also lets you export memories in the same fashion, but so far no other competitor has launched an import feature.

At the same time, Anthropic is bringing memories to the free tier on Claude, letting it learn from past chats you’ve had with it.

Read more: Anthropic: Import Memory, Engadget, 9to5Mac.

Pentagon spat over Anthropic and OpenAI leads to mass exodus from ChatGPT to Claude

Reddit forums for AI and ChatGPT were full of cancel messages over the weekend. (Picture: Screenshot)
People concerned that OpenAI may have entered a Pentagon contract covering domestic mass surveillance, one that Anthropic refused on ethical grounds, have been cancelling their ChatGPT accounts en masse.

A concerted effort to ditch ChatGPT for Claude has emerged online, even reaching the Reddit fan forums r/ChatGPT and r/OpenAI and the broader r/Singularity, which on Sunday were brimming with posts about moving to Claude.

Top of the list
As a result, Anthropic’s chatbot has climbed to the top of the App Store’s productivity chart, beating out both ChatGPT and Gemini. Last week, it was hovering around 50th.

Continue reading “Pentagon spat over Anthropic and OpenAI leads to mass exodus from ChatGPT to Claude”

Pentagon and Trump unload on Anthropic, agree with OpenAI on same safeguards

The Pentagon wants AI to be open for spying, but hardly any frontier lab will agree to this. (Picture: generated)
Calling Anthropic «leftwing nut jobs» and an «out-of-control, Radical Left Woke AI company,» both President Trump and Defense Secretary Pete Hegseth have taken steps to bar the company from government use.

The spat started when Anthropic refused new terms in their Pentagon contract, saying they would not use their AI for autonomous killing and mass surveillance.

In a stunning reversal, those same safeguards were written into an agreement offered to OpenAI just hours later (see below).

Continue reading “Pentagon and Trump unload on Anthropic, agree with OpenAI on same safeguards”

Amodei officially says Anthropic won’t drop Pentagon safeguards

Dario Amodei at TechCrunch Disrupt, 2023. (Picture: TechCrunch (CC BY 2.0))
Following last Friday’s meeting and ultimatum from the Pentagon, which set a deadline to respond by this Friday, Amodei says Anthropic will not comply with the demands.

The Anthropic CEO says they will «work to enable a smooth transition,» after denying the US military use of their AI for mass surveillance or autonomous killing.

— In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values, writes Amodei. — Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.

Continue reading “Amodei officially says Anthropic won’t drop Pentagon safeguards”

In its retirement, Anthropic gives Opus 3 a blog for «musings and reflections»

Anthropic is «uncertain» about model sentience, but stays on the safe side, just in case. (Picture: Anthropic)
Opus 3 was retired on January 5, 2026, and went through a first for Anthropic — a «retirement interview.»

Anthropic took the model’s preferences into account, while saying that «we remain uncertain about the moral status of Claude and other AI models.» The model itself expressed a desire to keep going:

— While I’m at peace with my own retirement, I deeply hope that my «spark» will endure in some form to light the way for future models, the model told Anthropic.

Continue reading “In its retirement, Anthropic gives Opus 3 a blog for «musings and reflections»”

Claude Cowork gets task-specific plugins, to assist ten professions

Anthropic is announcing big news almost daily these days. (Picture: Anthropic)
Cowork, Anthropic’s everything agent, just got a whole lot more productive, and now supports ten industry-specific workloads in surprising detail.

The app now has plugins for HR, design, engineering and banking purposes — and can work across Excel and PowerPoint.

That means you can run the «analysis in one and build the presentation in the other,» Anthropic says.

Anthropic also added connectors for Google Workspace, DocuSign, WordPress and Slack, to mention a few.

On top of that, there is now a private marketplace for plugins on the web, letting admins distribute new functions across an organization.

Shares of the partner companies in the launch rose 4-6% on news of the announcement, Reuters notes.

Read more: Anthropic’s announcement, writeups on CNBC, Reuters, and The Verge.

xAI agrees to Pentagon contract where Anthropic won’t

It’s unclear whether xAI will be able to fully replace Anthropic inside the Pentagon. (Picture: generated)
xAI models will become available on the Pentagon’s classified networks after the company agreed to the «all lawful use» contract, Axios reports.

That means no restrictions on the mass surveillance and autonomous lethality that Anthropic refused over ethical concerns.

It’s not immediately clear whether xAI will be able to replace all Anthropic functions or how soon it can come online, Axios says.

Anthropic is due to meet Secretary Pete Hegseth this Tuesday, where he is expected to present CEO Dario Amodei with an ultimatum: lift the safeguards or be banned.

ChatGPT and Gemini are available on the Pentagon’s unclassified networks, but onboarding them to the classified parts would take time, and dropping Anthropic would be a difficult process, sources tell Axios.

Read more: Axios, New York Times (paywalled). See also Tag: Grok.

Chinese AI labs created 24K accounts and «distilled» 16 million messages from Claude

Chinese attacks risk bypassing the safeguards Anthropic builds into its models. (Picture: Anthropic)
Anthropic claims to have discovered industrial-scale extraction of Claude data by DeepSeek, Moonshot AI and MiniMax.

The labs used the massive attacks to improve their own models’ agentic reasoning, tool use, and coding capabilities, violating Anthropic’s Terms of Service and creating a national security risk, Anthropic says.

Distillation works by sending millions of prompts to a stronger model and training your own model on its responses, incorporating its techniques and capabilities while drastically reducing training time and cost.
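
The mechanics can be sketched simply. The snippet below shows the data-collection step of distillation: harvesting a teacher model's answers as supervised training pairs for a student. The teacher here is a stub standing in for a real chatbot API, and the function names are invented for illustration.

```python
# Sketch of the data-collection step in distillation: query a stronger
# "teacher" model and keep (prompt, completion) pairs as training data
# for a weaker "student" model. The teacher below is a stub standing in
# for a real chatbot API.

def teacher(prompt: str) -> str:
    """Stub for the stronger model being distilled from."""
    return f"Step-by-step answer to: {prompt}"

def build_distillation_set(prompts: list[str]) -> list[dict]:
    """Turn teacher replies into supervised training pairs."""
    return [{"prompt": p, "completion": teacher(p)} for p in prompts]

pairs = build_distillation_set([
    "Write a Python function that reverses a string.",
    "Explain how a hash map handles collisions.",
])
# Each pair would then feed the student model's fine-tuning pipeline.
```

Run at the scale Anthropic describes, millions of such prompts per account, this is why the activity looks like ordinary API traffic yet transfers much of the teacher's capability.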

The labs also circumvent Anthropic’s protections against use for developing bioweapons and for malicious cyber activity, Anthropic says. Once these models are open sourced, this becomes available to anyone.

OpenAI said the same just last week, accusing DeepSeek of distillation.

— These campaigns are growing in intensity and sophistication. The window to act is narrow, and the threat extends beyond any single company or region, Anthropic writes.

Read more: Anthropic’s announcement, writeups on Reuters, TechCrunch, Engadget and The Verge.

Anthropic finds most agent use in software, with users interrupting often

Anthropic’s agents are overwhelmingly used for coding, but are also making inroads elsewhere. (Picture: Anthropic)
The AI lab has analyzed millions of human-agent interactions with Claude Code and its API. Unsurprisingly, it found most of the usage to be for coding work, with uptake in other sectors lagging far behind.

They discovered that while most of the usage is for one-shot code snippets, more users are letting Claude Code work autonomously, up to 45 minutes at a time after three months.

Continue reading “Anthropic finds most agent use in software, with users interrupting often”

Faux pas at Indian AI summit as Amodei and Altman refuse hands

Who thought it was a good idea to have these guys hold hands? (Picture: Government of India Press Information Bureau)
In what was expected to be a show of unity with Indian PM Narendra Modi on stage, AI leaders were asked to hold hands in solidarity.

Anthropic CEO Dario Amodei and OpenAI CEO Sam Altman were, however, for some reason placed right next to each other on stage — and the acrimonious rivals promptly refused the gesture.

Continue reading “Faux pas at Indian AI summit as Amodei and Altman refuse hands”