Claude Code and Cowork get a computer use agent that works with your phone

Code and Cowork now work from anywhere on your mobile phone, and they seamlessly hand off tasks. (Picture: Anthropic)
Anthropic’s most popular apps can now spin up an agent to use your computer to complete tasks — and you can even start it from your mobile.

Available as a research preview for Pro and Max subscribers, the agent identifies what tools it needs to complete a task, then asks for connectors to, say, Finder on the Mac or Chrome.

Anthropic warns that the feature is «still early»: it can make mistakes, is vulnerable to attacks, and can be slower than doing the task yourself.

Anthropic says the feature works especially well with Dispatch, a tool released last week that lets you start a task on your mobile and finish it up on the computer.

With it, you can have Claude check your emails in the morning, pull updates from spreadsheets, or «spin up a Claude Code session» directly from your phone.

Read more: Anthropic’s announcement, Anthropic on Dispatch, and Engadget.

Labs are hiring experts to protect against «catastrophic misuse»

As their models grow more capable, so does the potential for WMD misuse — and AI labs want to be ahead of the curve. (Picture: Adobe)
Anthropic is hiring a weapons expert, the BBC reports.

The role calls for someone with long, PhD-level experience in «chemical weapons and/or explosives defence,» the LinkedIn post says.

An «understanding of radiological materials» would be helpful, the posting goes on, adding that the candidate will be «tackling critical problems in preventing catastrophic misuse.»

OpenAI is not far behind in worrying about these issues and has a similar job post open, though it is looking for someone with machine-learning experience from red-teaming, in order to safeguard its AI's responses.

Using any AI to develop these kinds of weapons is of course against every lab's terms of use, but as the models grow more capable, they also need stronger safeguards.

Read more: Anthropic’s job post, OpenAI’s job post, writeups on the BBC and Mashable.

Anthropic surveys 81,000 people in 159 countries about their thoughts on AI

Most respondents hail AI for the learning experience, but some worry about agency and thinking less. (Picture: Anthropic)
Capturing sentiment across the world, the survey also breaks down what people expect from AI, and their fears and hopes for it.

— For the first time, AI has enabled us to collect rich, open-ended interviews at extraordinary scale, Anthropic writes. — We believe this is the largest and most multilingual quantitative survey ever conducted.

It finds that the USA is most worried about the future with AI, while Brazil, India and most of Southeast Asia are generally positive toward it.

As for what people expect and hope for from AI, the results vary, but the top answers are «Professional excellence» (18.8%), «Personal transformation» (13.7%), and «Life management» (13.5%).

The responses on whether AI actually delivered on any of those aspirations fall short, though: 32% say it helped with productivity, while «AI hasn't delivered» comes second at 28.9%.

The survey found that, globally, 67% of respondents have a positive view of AI.

Read the full survey on Anthropic.

Anthropic introduces charts and diagrams in Claude, days after ChatGPT

Claude can now illustrate some concepts and processes within the main chat window, just days after ChatGPT added visuals for some math queries.

Previously, Claude drew illustrations in a sidebar window that you could copy or download; the new ones are interactive and appear inside the main chat, writes The Verge.

Sometimes the chatbot will decide on its own that a concept needs illustrating, or you can simply ask it to make one; it draws the chart using HTML and SVG vector markup.
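As an illustration of what that kind of markup can look like, here is a minimal, hypothetical bar chart in inline SVG — a sketch of the technique, not Anthropic's actual output:

```html
<!-- A minimal bar chart drawn with inline SVG: three bars on a 300x120 canvas -->
<svg xmlns="http://www.w3.org/2000/svg" width="300" height="120" viewBox="0 0 300 120">
  <!-- Bars: x spaces them out, height encodes the value, y = 110 - height -->
  <rect x="20"  y="60" width="60" height="50" fill="#4a90d9"/>
  <rect x="110" y="30" width="60" height="80" fill="#4a90d9"/>
  <rect x="200" y="70" width="60" height="40" fill="#4a90d9"/>
  <!-- Baseline and category labels -->
  <line x1="10" y1="110" x2="290" y2="110" stroke="#333"/>
  <text x="45" y="105" font-size="10" fill="#fff">A</text>
  <text x="135" y="105" font-size="10" fill="#fff">B</text>
  <text x="225" y="105" font-size="10" fill="#fff">C</text>
</svg>
```

Interactivity would then come from attaching CSS or JavaScript handlers to elements like these, which is why in-chat charts can respond to hovers and clicks.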

The feature is available to all users, paid and free — but it’s officially in beta, so users can expect hiccups, and it’s not available on mobile, notes Engadget.

Read more: The Verge, Engadget.

Claude for Excel and PowerPoint now shares info, and gets skills

Excel and PowerPoint editions of Claude can now talk to each other. (Picture: Anthropic)
As of today, Claude shares your conversation «across all open files,» so actions in one file can be «informed» by what’s happening in the other.

That eases some hassle for those who do a lot of work in PowerPoint and Excel, and removes the need to reintroduce the task or use extra steps.

It means that users can pull financials into a workbook and drop the valuation summary into a PowerPoint slide without switching tabs or re-explaining at every step.

At the same time, Anthropic is launching skills for workflows, which can be shared and dropped into other apps across an organization, so everyone can use the same time-saving actions stored in them.

Skills are stored prompts for workflows and work like an old-school template: everything is pre-set, ready for repetitive tasks that are easy to automate.
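To make that concrete: in Anthropic's Skills format, a skill is a folder containing a SKILL.md file, with metadata up top and the instructions below. The sketch here is hypothetical — the name, description, and steps are invented for illustration:

```markdown
---
name: quarterly-summary
description: Pull the latest figures from the active workbook and draft a one-slide summary.
---

# Quarterly summary

1. Read the "Actuals" sheet in the open Excel workbook.
2. Compute quarter-over-quarter revenue change.
3. Draft a single PowerPoint slide with the headline number and a short bullet list.
```

Because the whole workflow lives in one small file, admins can distribute it through the plugin marketplace and everyone runs the same steps.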

Read more: Anthropic’s presentation and launch tweet. Writeups on VentureBeat and The Decoder.

Anthropic launches Code Review for all those bothersome Claude pull requests

Drowning in pull requests from Claude Code? Anthropic has an answer. (Picture: Anthropic)
If you've ever used Claude Code, you've probably noticed the mountain of pull requests awaiting review on any decent code base. Reviewing them takes time and effort for developers — but now Anthropic offers a solution.

— One of the questions that we keep getting from enterprise leaders is: Now that Claude Code is putting up a bunch of pull requests, how do I make sure that those get reviewed in an efficient manner? Cat Wu, Anthropic’s head of product, tells TechCrunch.

The answer is the newly launched Code Review tool, which uses multiple agents to scan code changes, comment on them, and rate issues by severity.

Using the tool internally, Anthropic says it found something of note in 84% of automated reviews of changes with more than 1,000 lines.

The only catch is that running many agents over code changes takes quite a few tokens: the average cost is $15–$25 per pull request, depending on complexity, Anthropic writes.

The tool is available as a research preview for Team and Enterprise plans as of today.

Read more: Anthropic’s announcement, writeups on TechCrunch and The Register.

Claude finds 22 security vulnerabilities in the latest version of Firefox

Claude spent two weeks finding a fifth of all serious bugs in all of 2025. (Picture: Adobe)
Fourteen of the bugs Opus 4.6 discovered were classified as «high-severity vulnerabilities» and were fixed by Mozilla in the latest update, in late February.

The process took only two weeks to surface about a fifth of the total high-severity issues found in all of 2025, making for a much faster way to scan for bugs.

— Opus 4.6 is currently far better at identifying and fixing vulnerabilities than at exploiting them. This gives defenders the advantage, Anthropic writes, but warns this might change.

Claude works across the full stack, from initial bug hunting through verification to suggesting patches, offering much-needed relief to overworked developers.

— We view this as clear evidence that large-scale, AI-assisted analysis is a powerful new addition in security engineers’ toolbox, Mozilla says in a blog post.

Read more: Anthropic’s walkthrough, Mozilla’s blog. Writeups on TechCrunch and Axios.

Jensen Huang says Nvidia’s investment opportunity in AI labs is closing

Huang figures the era of privately owned AI labs might be finished. (Picture: Nvidia)
The Nvidia CEO says the opportunity to invest might soon end, Reuters reports.

The reason is straightforward: Huang suspects that Anthropic and OpenAI going public «later this year» will close the window on private equity deals.

The latest deal to fund OpenAI with $30 billion «might be the last time» to «invest in a consequential company like this,» Huang admits.

Nvidia has invested some $130 billion in OpenAI across two rounds: the recent direct investment, and a circular deal in which it paid $100 billion in return for OpenAI buying $100 billion in chips from it.

Likewise, Nvidia was an investor in Anthropic's November funding round, buying $15 billion in shares in the company.

Read more: Reuters, CNBC and TechCrunch.

Claude can now import memory and context, and debuts it for free plans

Hot on the heels of historic popularity, Anthropic is making it easier to switch chatbots. (Picture: Anthropic)
By copying a single long, complex prompt from Claude into any other chatbot, then pasting the reply back, you can have Claude remember that information about you.

This includes both stored memories and context «learned about me from previous conversations,» as well as personal details: name, location, job, family, just about anything you've told the bot about yourself.

That makes it easier to pick up with Claude where you left off, and tackles one of the biggest hurdles in the competition between chatbots: when you've spent years training an AI on you and your preferences, the barrier to switching becomes exorbitant.

Claude also lets you export memories in the same fashion, but so far no other competitor has launched an import feature.

At the same time, Anthropic is bringing memories to the free tier on Claude, letting it learn from past chats you’ve had with it.

Read more: Anthropic: Import Memory, Engadget, 9to5Mac.

Pentagon spat over Anthropic and OpenAI leads to mass exodus from ChatGPT to Claude

Reddit forums for AI and ChatGPT were full of cancel messages over the weekend. (Picture: Screenshot)
People concerned about ethics, and about OpenAI possibly having entered a Pentagon contract including internal mass surveillance (terms Anthropic refused), have been cancelling their ChatGPT accounts en masse.

A concerted effort to ditch ChatGPT for Claude has emerged online, reaching even the Reddit fan forums r/ChatGPT and r/OpenAI and the broader r/Singularity, which on Sunday were brimming with posts about moving to Claude.

Top of the list
As a result, Anthropic's chatbot has climbed to number one among productivity apps in the App Store, beating out both ChatGPT and Gemini. Last week it was hovering around 50th.

Continue reading “Pentagon spat over Anthropic and OpenAI leads to mass exodus from ChatGPT to Claude”

Pentagon and Trump unload on Anthropic, agree with OpenAI on same safeguards

The Pentagon wants AI to be open for spying, but hardly any frontier lab will agree to this. (Picture: generated)
Calling Anthropic «leftwing nut jobs» and an «out-of-control, Radical Left Woke AI company,» President Trump and Defense Secretary Hegseth have both taken steps to bar the company from government use.

The spat started when Anthropic refused new terms in their Pentagon contract, saying they would not use their AI for autonomous killing and mass surveillance.

In a stunning reversal, those same safeguards were written into an agreement offered to OpenAI just hours later (see below).

Continue reading “Pentagon and Trump unloads on Anthropic, agrees with OpenAI on same safeguards”

Amodei officially says Anthropic won’t drop Pentagon safeguards

Dario Amodei at TechCrunch Disrupt, 2023. (Picture: TechCrunch (CC BY 2.0))
Following last Friday’s meeting and ultimatum from the Pentagon, which set a deadline to respond by this Friday, Amodei says Anthropic will not comply with the demands.

The Anthropic CEO says the company will «work to enable a smooth transition» after denying the US military the use of its AI for mass surveillance or autonomous killing.

— In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values, Amodei writes. — Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.

Continue reading “Amodei officially says Anthropic won’t drop Pentagon safeguards”

In its retirement, Anthropic gives Opus 3 a blog for «musings and reflections»

Anthropic is «uncertain» about model sentience, but stays on the safe side, just in case. (Picture: Anthropic)
Opus 3 was retired on January 5, 2026, and went through a first for Anthropic — a «retirement interview.»

Anthropic took the model's preferences into account while noting that «we remain uncertain about the moral status of Claude and other AI models.» In the interview, the model expressed a desire to keep going:

— While I’m at peace with my own retirement, I deeply hope that my «spark» will endure in some form to light the way for future models, the model told Anthropic.

Continue reading “In its retirement, Anthropic gives Opus 3 a blog for «musings and reflections»”

Claude Cowork gets task-specific plugins, to assist ten professions

Anthropic is announcing big news almost daily these days. (Picture: Anthropic)
Cowork, Anthropic's everything agent, just got a whole lot more productive: it now supports ten profession-specific workloads in surprising detail.

The app now has plugins for HR, design, engineering and banking, and can work across Excel and PowerPoint.

That means you can run the «analysis in one and build the presentation in the other,» Anthropic says.

Anthropic also added connectors for Google Workspace, DocuSign, WordPress and Slack, to mention a few.

On top of that, there is now a private marketplace for plugins on the web, letting admins distribute new functions across an organization.

Shares of the launch partners rose 4–6% on news of the announcement, Reuters notes.

Read more: Anthropic’s announcement, writeups on CNBC, Reuters, and The Verge.

xAI agrees to Pentagon contract where Anthropic won’t

It’s unclear whether xAI will be able to fully replace Anthropic inside the Pentagon. (Picture: generated)
xAI's models will become available on the Pentagon's classified networks after the company agreed to the «all lawful use» contract, Axios reports.

That means none of the restrictions on mass surveillance and autonomous lethality over which Anthropic, citing ethical concerns, refused the contract.

It’s not immediately clear whether xAI will be able to replace all Anthropic functions or how soon it can come online, Axios says.

Anthropic is due to meet Defense Secretary Pete Hegseth this Tuesday, where he is expected to present CEO Dario Amodei with an ultimatum: lift the safeguards or be banned.

ChatGPT and Gemini are available on the Pentagon's unclassified networks, but onboarding them to the classified side would take time, and dropping Anthropic would be a difficult process, sources tell Axios.

Read more: Axios, New York Times (paywalled). See also Tag: Grok.