
These numbers are so impressive, one analyst says they defy comparison:
Continue reading “Anthropic reaches $3 billion in revenue so far this year”


AI could wipe out half of all entry-level white-collar jobs, he tells Axios, and spike unemployment to 10-20% in the next five years.
Continue reading “Anthropic CEO says it’s time to wake up on AI job losses”

Both models produce near-instant responses to queries, but can switch to extended reasoning for more demanding requests.
World’s best on coding?
Anthropic claims Opus is “the world’s best coding model,” and it edges out Gemini 2.5 Pro, o3, and GPT-4.1 on SWE-bench Verified, but cannot surpass OpenAI’s o3 on certain PhD-level benchmarks, according to TechCrunch.
Continue reading “Anthropic claims world’s best coding AI with Claude 4 Opus and Sonnet”

Continue reading “Apple says considering AI search in Safari, but ‘not good enough yet’”

In a recent interview with Axios focused mostly on security issues, Anthropic said “virtual employees” will be a step up from mere “agents” on corporate networks.
This will be the next AI innovation, said Jason Clinton, the company’s chief information security officer.
Whereas agents focus on specific, programmable tasks, act with some autonomy, and of course require oversight, a “virtual employee” takes it a step further, with its own memories and its own corporate accounts and passwords.
That creates a major cybersecurity headache, Clinton explained, around the oversight and hackability of these new employees.
Read the full story at Axios, and check out Anthropic’s research on agents.

Training an LLM like Claude often consists of unmonitored consumption of huge amounts of data, with minimal human involvement.
“Language models like Claude aren’t programmed directly by humans,” says Anthropic. “They arrive inscrutable to us, the model’s developers. This means that we don’t understand how models do most of the things they do,” they add.
Tracing the thoughts of an LLM
Now they have set out to change that, with a pair of scientific studies mapping out the model’s internal reasoning, or how it actually “thinks” in response to ordinary prompts.
Continue reading “Claude AI reveals surprising internal thinking, says Anthropic”

This is just one of many lawsuits against AI companies claiming they copied and used copyrighted materials in training their models, cases that hinge on the fair-use provision of copyright law.
The issue resurfaced overnight after ChatGPT’s new image generator was used en masse to produce images in the style of Studio Ghibli, whose work had evidently been used in training the model, given how easily it could mimic that style.
Continue reading “Anthropic scores early copyright win in battle with music publishers”