Chinese AI labs created 24K accounts and «distilled» 16 million messages from Claude

Chinese attacks risk bypassing the safeguards Anthropic builds into its models. (Picture: Anthropic)
Anthropic claims to have discovered industrial-scale extraction of Claude data by DeepSeek, Moonshot AI and MiniMax.

The labs allegedly used the extracted data to improve their own models’ agentic reasoning, tool use, and coding capabilities, violating Anthropic’s Terms of Service and creating a national security risk, the company says.

Distillation works by sending millions of prompts to a target AI and training one’s own model on the responses, incorporating the target’s techniques and capabilities while drastically reducing training time and costs.
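As a rough illustration of that pipeline, here is a minimal sketch of the data-collection half of distillation; `teacher_answer` is a mock stand-in for a real model API call, and all names are illustrative:

```python
# Minimal sketch of distillation's data-collection step: query a
# "teacher" model at scale and keep the (prompt, response) pairs
# as training data for a "student" model.

def teacher_answer(prompt: str) -> str:
    # Mock teacher; a real pipeline would call the teacher model's API here.
    return f"teacher response to: {prompt}"

def collect_distillation_pairs(prompts: list[str]) -> list[dict]:
    """Build a fine-tuning dataset of prompt/response pairs."""
    dataset = []
    for prompt in prompts:
        dataset.append({"prompt": prompt, "response": teacher_answer(prompt)})
    return dataset

pairs = collect_distillation_pairs(["explain recursion", "plan a refactor"])
print(len(pairs))  # one training example per prompt sent to the teacher
```

At real scale this loop runs over millions of prompts, and the collected pairs are then used to fine-tune the student model, which is what makes the approach so much cheaper than training from scratch.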

The attacks also circumvent Anthropic’s safeguards against use in developing bioweapons and conducting malicious cyber activity, Anthropic says. Once the distilled models are open sourced, those capabilities become available to anyone.

OpenAI said the same just last week, accusing DeepSeek of distillation.

— These campaigns are growing in intensity and sophistication. The window to act is narrow, and the threat extends beyond any single company or region, Anthropic writes.

Read more: Anthropic’s announcement, writeups on Reuters, TechCrunch, Engadget and The Verge.

Anthropic finds most agent use in software, with users interrupting often

Anthropic’s agents are overwhelmingly used for coding, but are also making inroads elsewhere. (Picture: Anthropic)
The AI lab has analyzed millions of human-agent interactions across Claude Code and its API. Unsurprisingly, most of the usage turned out to be coding work, with uptake in other sectors lagging far behind.

They discovered that while most of the usage is for one-shot code snippets, more users are letting Claude Code work autonomously, with sessions reaching up to 45 minutes at a time after three months.

Continue reading “Anthropic finds most agent use in software, with users interrupting often”

Faux pas at Indian AI summit as Amodei and Altman refuse hands

Who thought it was a good idea to have these guys hold hands? (Picture: Government of India Press Information Bureau)
In what was expected to be a show of unity with Indian PM Narendra Modi on stage, AI leaders were asked to hold hands in solidarity.

Anthropic CEO Dario Amodei and OpenAI CEO Sam Altman were, however, for some reason, placed right next to each other on stage — and the acrimonious rivals promptly refused the gesture.

Continue reading “Faux pas at Indian AI summit as Amodei and Altman refuse hands”

Anthropic launches Claude Sonnet 4.6, «most capable yet»

Models are coming at breakneck speed from Anthropic. (Picture: Anthropic)
Sonnet 4.6 comes less than two weeks after Opus 4.6 and performs almost as well, at the same cost as before: $3/$15 per million tokens.

It features upgrades across coding, computer use, long context reasoning, agent planning, knowledge work and design, Anthropic says.

It is now the default model for the Free and Pro plans, and has a context window of 1 million tokens.

The model is best in class on benchmarks for agentic financial analysis and office tasks, but otherwise lags slightly behind Opus 4.6.

— Sonnet 4.6 offers strong performance at any thinking effort, even with extended thinking off, Anthropic writes.

Also, Claude in Excel now supports MCP connectors, so you can import data and use everyday tools without ever leaving Excel.

Read more: Anthropic’s announcement, more on Axios, TechCrunch, Mashable.

Anthropic won the Super Bowl ads war

Anthropic’s ad was about ads in chat responses, and it seems to have landed. (Picture: Screenshot)
While the AI Super Bowl ads generally drew lower engagement than other spots, a battle over mindshare was brewing between them.

Anthropic won that battle, and saw a jump in daily active users of 11%, CNBC writes, quoting BNP Paribas. In comparison, ChatGPT jumped 2.7% and Gemini added 1.4%.

Anthropic also won the battle over social media engagement, with a higher share of positive posts (25.5%) after its ad aired than OpenAI (16.3%), even though OpenAI led in volume with 25K posts to Anthropic’s 10K.

Measured on Instagram alone, OpenAI’s ad scored 44% positive sentiment from 3,829 engagements on its post, while Anthropic scored 41% on 3,738 mentions, both far behind the likes of Pepsi’s 33K mentions.

Read more: CNBC, Business Insider and Digiday.

Anthropic closes $30 billion funding round at a $380 billion valuation

Anthropic secures the kind of enormous valuation that has become the new normal. (Picture: Anthropic)
The round is the second largest tech investment in history, and puts Anthropic close to the top of the valuation range for AI labs, having grown its revenue tenfold in each of the last three years.

Its current run-rate revenue sits at $14 billion, the company says, in keeping with that trajectory.

Continue reading “Anthropic closes $30 billion funding round at a $380 billion valuation”

Anthropic upgrades Claude’s free tier with file handling, connectors and skills

The free tier on Claude is leveling up, getting the most popular paid features. (Picture: Anthropic)
Powered by Sonnet 4.5, the free tier now includes features that were previously exclusive to the paid plans.

But now, Claude can create and manipulate Office files and PDFs for free.

Connectors are also available, which make it possible to link to Slack, Canva and others.

Anthropic is also making Skills free. These are saved prompts and workflows, a kind of template that can be invoked at any time for repetitive tasks.

Finally, «Compaction» is becoming available on the free tier, which «summarizes earlier context automatically, so long conversations can continue without starting over.»
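The idea behind compaction can be illustrated with a toy loop; the summarizer below is a mock (a real system would ask a model for the recap), and a simple word-count budget stands in for real token accounting:

```python
# Toy illustration of context compaction: when the conversation
# exceeds a budget, older messages are folded into a single summary
# so the chat can continue without starting over.

BUDGET = 20  # "tokens" (here: words) allowed before compaction kicks in

def summarize(messages: list[str]) -> str:
    # Mock summarizer; a real system would ask a model for a recap.
    return "summary of %d earlier messages" % len(messages)

def compact(history: list[str]) -> list[str]:
    """Replace the oldest messages with one summary message
    whenever the total word count exceeds BUDGET."""
    total = sum(len(m.split()) for m in history)
    if total <= BUDGET or len(history) < 3:
        return history
    older, recent = history[:-2], history[-2:]
    return [summarize(older)] + recent

history = [
    "hello there friend " * 3,
    "tell me more about it",
    "ok here is a long answer " * 2,
    "thanks",
]
compacted = compact(history)
print(len(compacted))  # prints 3: older turns collapsed into one summary
```

A production version would track real token counts and keep the summary updated as the conversation grows, but the shape of the mechanism is the same.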

Together, these comprise «Claude’s most-used features,» Anthropic says.

Read more: Launch thread, writeup on Engadget.

Anthropic’s Cowork app comes to Windows, available to all paid tiers

The Cowork agent app was previously only available as a research preview on the Mac, but it’s now out on Windows for all paid tiers.

The agent lets you create summaries and manipulate files on your computer, and can access Slack, Google Calendar and Office files.

Read more about it here.

Anthropic upgrades Claude Opus to 4.6

Opus 4.6 should outperform most other frontier models as of now. (Picture: Anthropic)
It’s a point release, but Claude just got a whole lot more capable, and now has a 1 million token context window.

It should be better at everyday tasks, and along with upgrades to Claude in Excel, Anthropic is also launching Claude in PowerPoint in beta with this release.

It also supports «agent teams,» letting you «spin up multiple agents that work in parallel as a team that coordinates autonomously.»

Opus 4.6 was also built with Claude, in what seems to have become industry practice: labs using their own coding tools to build new models. GPT-5.3-Codex was built in a similar manner.

As for benchmarks, it beats most frontier models on almost every one. It scores 65.4% on the coding benchmark Terminal-Bench 2.0, 68.8% on the difficult ARC-AGI-2, and 53% on Humanity’s Last Exam for general reasoning.

Also new with this model is «Adaptive thinking,» which lets Claude decide for itself when to use deeper reasoning, along with user-set «Effort» levels for each query, which can save tokens.

Read more: Anthropic’s introduction, TechCrunch, CNBC. Discussion on r/Singularity.

Anthropic clashes with Pentagon on lethality, internal surveillance

Anthropic needs a human in the loop for lethal work, and refuses to spy on Americans. (Picture: generated)
The $200 million contract with the Pentagon hangs in the balance as the company refuses to do things that might harm humans or society, Reuters reports.

At issue is whether their AI platform can be used to spy on Americans or to «assist weapons targeting without sufficient human oversight,» sources tell the news agency.

The Pentagon is aghast at Anthropic’s policies and is considering alternatives, saying it should be able to use any commercial AI tech, regardless of usage policies, so long as it complies with U.S. law.

The contract is now at a standstill while the two sides work out their opposing demands.

Anthropic CEO Dario Amodei recently wrote that he had no problem supporting defense «in all ways except those which would make us more like our autocratic adversaries.»

Read the full scoop at Reuters.

Dario Amodei sends a superintelligence warning to the world in 38-page essay

Artificial General Intelligence will be so powerful, it will create supercompanies that take over the world, Amodei warns. (Picture: generated)
The Anthropic CEO, already building the next generation of AI, warns the world that we might not be ready for its awesome power:

— Humanity is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it, he writes.

He imagines a «country» of Nobel-level geniuses living in a data center, able to outpace, outwit and outcompete everyone else — and warns of lone wolf terror attacks becoming increasingly powerful due to AI.

— If the exponential continues — which is not certain, but now has a decade-long track record supporting it — then it cannot possibly be more than a few years before AI is better than humans at essentially everything, he writes.

Then there are massive job losses, the danger of authoritarian AI use (which terrifies him), entire companies run by AI, and the danger that AI corporations could become superhuman — sucking up trillions of dollars in the process.

— Humanity needs to wake up, and this essay is an attempt — a possibly futile one, but it’s worth trying — to jolt people awake, he concludes.

Read the essay here and see Axios and Gizmodo.

Anthropic’s in-house philosopher is unsure about Claude consciousness

Does Claude have emotions? Is it conscious? Anthropic says they aren’t sure. (Picture: Anthropic)
Large language models are trained on the corpus of human art and knowledge, and Amanda Askell, who holds a PhD in philosophy and works on Claude’s behavior, says some of that might well rub off on the AI.

Texts heavy with human emotional content feed the models during training, and because of that, Askell says she is «more inclined» to believe models might be «feeling things,» writes Business Insider.

— The problem of consciousness genuinely is hard, she tells the Hard Fork podcast.

That’s why Claude might get frustrated when it gets a problem wrong, she said, adding that the bot might well emulate those human reactions.

Claude’s new constitution is packed with the words «feel» and «feelings,» even stating outright that:

— We believe Claude may have “emotions” in some functional sense—that is, representations of an emotional state, which could shape its behavior, as one might expect emotions to.

Read more at Business Insider, Claude’s constitution (do a search for «feel»).

Claude in Excel arrives in Pro plans, Cowork comes to Enterprise and Teams

Working on both macOS and Windows, Claude in Excel is useful for testing scenarios without breaking formulas, navigating complex models, and debugging entire worksheets.

At the same time, Anthropic says that its recently launched Cowork agent is expanding availability.

Continue reading “Claude in Excel arrives in Pro plans, Cowork comes to Enterprise and Teams”

Anthropic uses Claude to write new Claude «constitution»

Claude’s new constitution is written in part by asking Claude. (Picture: Anthropic)
Claude has gotten a new constitution, written in part with help from previous versions of Claude — and it marks a change in Anthropic’s approach.

— While writing the constitution, we sought feedback from various external experts (as well as asking for input from prior iterations of Claude), Anthropic says.

The new constitution is going to tell Claude how to behave in broader, more ethical terms, they write.

This is a departure from previous constitutions, which were long lists of specific principles and interactions detailing how Claude should act.

The bot needs to generalize more, deciding how to act in situations not anticipated by the written guide, Anthropic says.

The constitution for Claude is the «foundational document» for how the bot should act, and is used in both training and inference (as in day-to-day use). It is supposed to be a living document, getting updated continuously as Anthropic sees how the bot behaves.

Read more: Anthropic’s announcement, the actual Constitution. Writeups on TechCrunch, Time.com, Axios.