Anthropic scrambles to contain fallout from Claude Code source code leak

The leak quickly spread to all corners of the internet and will be hard to contain. (Picture: Anthropic)
Anthropic has sent 8,000 copyright takedown notices after its source code leaked on March 31, fearing that competitors or bad actors might try to reverse engineer its coding agent, The Wall Street Journal and TechCrunch report.

Some 2,000 files and 512,000 lines of code had been inadvertently put on a public server, then copied to Github and «forked» (copied) at least 50,000 times, according to Ars Technica.

The code itself outlines Claude Code’s memory architecture, instructions for the AI bot and an upcoming feature with an always-on background agent, The Verge reports.

Developer and writer Gabriel Anhaia hailed Anthropic’s craft, calling it «both inspiring and humbling,» amid a deluge of comments all over x.com.

The leak was not due to malicious action but to «human error,» Axios says, and no user data was exposed.

Read more: The WSJ, TechCrunch, Ars Technica, and The Verge.

OpenAI developer releases Codex plugin for Claude Code

Codex for Claude Code might be a tad cheeky, but it’s useful. (Picture: screenshot)
Thanks to OpenAI’s Dominik Kundel, you can now call up OpenAI’s coding agent Codex within the Claude Code environment.

The plugin is fairly easy to install and use, so long as you have a ChatGPT account to log in with.

It’s handy for people who switch between the two models: Codex on Claude can review code, run an adversarial review, or take a handoff of the entire task, after which you can switch apps and finish the work in Codex.

— This plugin is a simple way to keep your Claude Code workflow and still use Codex where Codex is strong, writes OpenAI developer Vaibhav (VB) Srivastav in the instructions.

Whether Anthropic will like Codex integration in its flagship coding product is anyone’s guess.

Read more: Announcement tweet, OpenAI dev community, and instructions for use.

Github Copilot to start training on user interactions from April 24

Github is coming for your code, after a successful trial on internal Microsoft data. (Picture: Github)
If you have ever used Github Copilot to complete your code, your data can now be «used to train and improve our AI models,» Github says.

This comes after a trial period in which Copilot has been feeding on internal Microsoft engineers’ data, which the company says «improved model performance.»

They will not train on your entire code repositories, and will only use your interactions with Copilot — including accepted outputs, inputs sent to the model and «code context.»

Github is hardly alone here: Anthropic and OpenAI have been training on user interactions for more than half a year, and it’s common industry practice.

If you don’t like Copilot training on your data, you can opt out on the Copilot features page.

Read more: Github’s announcement, How-to Geek.

Anthropic launches Code Review for all those bothersome Claude pull requests

Drowning in pull requests from Claude Code? Anthropic has an answer. (Picture: Anthropic)
If you have ever used Claude Code, you have probably noticed the mountain of pull requests awaiting review on any decent code base. This takes time and effort for developers — but now Anthropic offers a solution.

— One of the questions that we keep getting from enterprise leaders is: Now that Claude Code is putting up a bunch of pull requests, how do I make sure that those get reviewed in an efficient manner? Cat Wu, Anthropic’s head of product, tells TechCrunch.

The answer is the newly launched Code Review tool that uses multiple agents to scan code changes, comment, and rate them for severity.

Using it internally, Anthropic found something of note in 84% of automated code reviews with more than 1,000 lines, they say.

The only problem is that running a lot of agents on code changes consumes quite a few tokens: the average cost is between $15 and $25 per pull request, depending on complexity, Anthropic writes.
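For context, here is a back-of-envelope sketch of how a multi-agent review could land in that price range. The per-million-token prices and the token counts below are illustrative assumptions, not figures published by Anthropic:

```python
def review_cost(input_tokens: int, output_tokens: int,
                price_in: float = 15.0, price_out: float = 75.0) -> float:
    """Estimate the USD cost of one automated review.

    Prices are USD per million tokens; the defaults are illustrative
    assumptions, not Anthropic's published rates.
    """
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Several agents reading ~1M tokens of diff and context and writing
# ~60k tokens of comments lands inside the reported $15-$25 range:
print(f"${review_cost(1_000_000, 60_000):.2f}")  # $19.50
```

At assumed rates like these, cost scales roughly linearly with how much diff and surrounding context the agents read, which would explain why larger, more complex pull requests sit at the top of the range.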

The tool is available as a research preview for Team and Enterprise plans as of today.

Read more: Anthropic’s announcement, writeups on TechCrunch and The Register.

Anthropic finds most agent use in software, with users interrupting often

Anthropic’s agents are overwhelmingly used for coding, but are also making inroads elsewhere. (Picture: Anthropic)
The AI lab has analyzed millions of human-agent interactions with Claude Code and its API. Unsurprisingly, it found most of the usage to be coding work, with uptake in other sectors lagging far behind.

They discovered that while most of the usage is for one-shot code snippets, more users are letting Claude Code work autonomously, for up to 45 minutes at a time after three months.

Continue reading “Anthropic finds most agent use in software, with users interrupting often”

Anthropic unlocks fast mode for Claude Code, 2.5x faster at 6x the price

For «work like rapid iteration or live debugging» Anthropic is letting users go fast, by toggling «/fast» on their console.

The feature is available in a «limited research preview» in the API for paid users with «extra usage» enabled, and there is a waiting list to get on it.

It’s the same model with the same capabilities and there is no change other than the speed — and cost.

The price for fast mode is $30 per million input tokens and $150 per million output tokens, but there is a 50% discount until February 16.
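Taken together with the «6x the price» headline, those rates imply standard pricing of $5 and $25 per million tokens. A quick sketch of what a session would cost at each rate (the token counts are illustrative, and applying the 50% launch discount as a flat multiplier is an assumption):

```python
FAST_IN, FAST_OUT = 30.0, 150.0  # USD per million tokens, per the announcement
BASE_IN, BASE_OUT = FAST_IN / 6, FAST_OUT / 6  # implied standard rates: 5.0 / 25.0

def session_cost(tokens_in: int, tokens_out: int,
                 price_in: float, price_out: float,
                 discount: float = 0.0) -> float:
    """Cost in USD for one session at the given per-million-token rates."""
    raw = (tokens_in * price_in + tokens_out * price_out) / 1_000_000
    return raw * (1 - discount)

# An illustrative session: 200k input tokens, 50k output tokens.
standard = session_cost(200_000, 50_000, BASE_IN, BASE_OUT)         # $2.25
fast = session_cost(200_000, 50_000, FAST_IN, FAST_OUT)             # $13.50
fast_promo = session_cost(200_000, 50_000, FAST_IN, FAST_OUT, 0.5)  # $6.75 until Feb 16
print(standard, fast, fast_promo)
```

Since input and output are both marked up by the same factor, the fast-mode bill is exactly six times the standard one, whatever the mix of tokens.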

Read more: Details on Claude, launch post.

Apple’s Xcode gets full agentic coding support for Claude and Codex

The new Xcode frees the developer from coding, so you can focus on innovation. (Picture: Apple)
With Xcode 26.3, there is full support for Claude and Codex — from idea iteration to file creation and structural editing.

Agents can also verify their work, and «collaborate throughout the entire development cycle,» Apple says.

They can search documentation, update project settings and «explore file structures.» Additionally, users can track what the agents are doing in the sidebar and adjust progress.

It only takes a single click to switch between models and pick the one best suited for the task, MacRumors writes.

Although Apple has worked exclusively with Anthropic and OpenAI to implement the features, any agent can be used — so long as it supports the Model Context Protocol.

Read more: Apple’s release, writeups on MacRumors, Engadget.

Google partners with Replit to bring «vibe coding» to the enterprise

Replit tightens its integration with Google models and Cloud.
Vibe coding is coming for the enterprise. (Picture: Google, modified)
Decade-old Replit is valued at $3 billion and is a «leader» in the AI vibe-coding space, writes CNBC, and it is now tightening its integration with Google Cloud and the Gemini models.

— The goal for us, and Google, is to make enterprise vibe coding a thing, Replit founder and CEO Amjad Masad said. — We want to show the world that these tools are actually going to transform businesses and how people work.

Under the new agreement, Replit will expand its Google Cloud use and «further integrate Google’s models into its platform,» Google writes on the deal.

Replit will gain access to all of the Gemini models, and the deal will «help enterprise customers embrace vibe coding.»

— Our mission is to enable the next billion software creators — from hobbyists to entrepreneurs to enterprises, Masad said.

Read more: Google’s announcement, writeup on CNBC.

Weekend roundup; expanded Sora, security research and the battle for India

For a limited time, Sora is available without invite codes for select countries, but the 30 generations per day limit may have to go.
No more invite codes for select countries in Sora 2, and bevy of new features. (Picture: generated)

Sora 2 expands, is now available without invite codes
Following the massive success of the Sora 2 video generator, OpenAI is opening up the service to those without invite codes in the USA, Canada, Japan and Korea «for a limited time.» Simultaneously, they are announcing reusable characters that can feature in more than one video and an easier way to stitch videos together. If that wasn’t enough, OpenAI is adding more video generations for power users hitting the 30-per-day generation limit and letting them pay for more. They are also musing about letting rightsholders get compensation for the reuse of their characters, as a means of getting paid for your work on the platform. They do warn that 30 generations a day requires too many GPUs and will be throttled at some stage.
More at: MacRumors and a Twitter announcement, list of available countries.

OpenAI reveals security research agent in beta
The new agent, Aardvark, will look through code repositories at scale almost like a human would, and find errors and exploits before the bad guys do. It will continually analyze your source code and find vulnerabilities. The agent has already been used to find «numerous» vulnerabilities in open source software, and OpenAI will provide pro bono scanning to «select, non-commercial» OSS systems. Aardvark is not being widely released, existing instead as a private beta inside OpenAI’s offices, kind of like Google’s CodeMender.
More at OpenAI’s announcement and ZDNet.

Read on for more!

Continue reading “Weekend roundup; expanded Sora, security research and the battle for India”

OpenAI’s 2025 Dev Day with Altman livestream incoming

Speculation is rife as to what Altman might announce at the livestream.
Will it be a browser, a new image model, or the highly anticipated AI device? It’s too soon to tell. (Picture: generated)
The October 6 stream will be an excellent moment for the OpenAI CEO to announce product news, but only a few items remain on the company’s to-do list.

Nothing has been announced but a cryptic tweet promising «new ships,» which could mean anything from new models to new modalities:

Continue reading “OpenAI’s 2025 Dev Day with Altman livestream incoming”

OpenAI tops ICPC coding contest for students, Google finished second

OpenAI solved 12 of 12 problems with vanilla GPT-5. Google had a custom model and solved 10.
OpenAI says they will now focus on scientific discovery. (Picture: OpenAI)
ChatGPT solved all 12 of 12 problems in the 2025 International Collegiate Programming Contest (ICPC) — an algorithmic programming contest for university students.

That result would have given it first place if it were human, as the best college teams only solved eleven.

Google also participated with a custom Gemini 2.5 Deep Think and earned Gold status, solving 10 of the problems and finishing second, Google claims.

Continue reading “OpenAI tops ICPC coding contest for students, Google finished second”

OpenAI announces GPT-5 Codex

GPT-5 Codex is slightly better than vanilla GPT-5 in benchmarks.
OpenAI is especially proud of the code review function in the new Codex. (Picture: OpenAI)
Savvy users have been using GPT-5-high with the Codex CLI (Command Line Interface) on their terminals for weeks, and consensus seems to be that it competes well with Claude.

Now, OpenAI is launching a custom, optimized version of GPT-5 for the Codex coding agent that they say is faster, more reliable and more steerable than before.

Continue reading “OpenAI announces GPT-5 Codex”

Google hires top execs, team from Windsurf — upending OpenAI’s deal

Google hires top execs and talent from Windsurf
Just as talks with OpenAI ended, Windsurf turned to Google. (Picture: Windsurf)
OpenAI had been negotiating a $3 billion deal to acquire the agentic coding platform, but Google just snagged its top executives to work on its Gemini platform.

The deal will see Windsurf CEO Varun Mohan, co-founder Douglas Chen and a small team join Google’s DeepMind division.

Also licensing key tech
Further, Google will invest $2.4 billion in a non-exclusive deal to license Windsurf’s technology, reports Reuters, among others.

OpenAI had been in long-running talks to buy the company in what would have been its biggest deal yet, and many said it was just around the corner as late as May 2025.

Continue reading “Google hires top execs, team from Windsurf — upending OpenAI’s deal”

OpenAI’s Codex now available to ChatGPT Plus users

ChatGPT Plus-tier gets access to Codex!
Wider availability for Codex likely means even more pressure on the coding market. (Picture: Chatgpt.com)
Spotted this morning: there is a new option in the sidebar at Chatgpt.com for the new Codex coding agent, meaning access has expanded.

Codex is the latest coding agent from OpenAI and runs on a modified o3 model.

Super-coding agent
It can generate several instances of code from your prompts, and even run them in a sandbox to select the best/most efficient version.
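The pattern described here is essentially best-of-n selection: generate several candidates, score each one in a sandbox, and keep the winner. A toy sketch with hypothetical stand-ins for generation and sandboxed scoring (none of this is OpenAI’s API):

```python
def best_of_n(candidates, score):
    """Return the highest-scoring candidate (best-of-n selection)."""
    return max(candidates, key=score)

# Stand-in "generated candidates": three attempts at doubling a number.
candidates = [
    lambda x: x + x,   # correct
    lambda x: x * 3,   # buggy
    lambda x: x,       # buggy
]

def score(fn, tests=((2, 4), (5, 10))):
    """Sandbox stand-in: count how many test cases the candidate passes."""
    return sum(fn(arg) == expected for arg, expected in tests)

best = best_of_n(candidates, score)
print(best(7))  # 14
```

The real system presumably scores candidates on far richer signals than a pass count, but the selection step itself reduces to picking the maximum over the generated batch.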

OpenAI says it can complete tasks autonomously that would otherwise take hours or days to finish, and they are using it themselves to offload repetitive tasks.

The Plus membership for ChatGPT is $20 a month, and Codex launched as a «research preview» in May for Pro users, who fork out $200 a month.

Update: It appears Codex now also has internet access, which is off by default and comes with a stern warning.

See also: teknotum on the Codex launch, and the announcement thread on X.

Anthropic claims world’s best coding AI with Claude 4 Opus and Sonnet

World's best coding model? According to Anthropic, yes, of course.
Anthropic’s new agentic, thinking and reasoning models are great for coding, and play Pokémon in 24-hour runs. (Picture: Anthropic)
Opus 4 can sustain almost a full work day of focused coding work, while Sonnet 4 is supposed to be excellent for thinking and reasoning.

Both models produce near-instant responses to queries, but can turn to reasoning and thinking for more demanding requests.

World’s best on coding?
Anthropic claims Opus is «the world’s best coding model,» and it edges out Gemini 2.5 Pro, o3 and GPT-4.1 on SWE-bench Verified, but cannot surpass OpenAI’s o3 on certain PhD-level benchmarks, according to TechCrunch.

Continue reading “Anthropic claims world’s best coding AI with Claude 4 Opus and Sonnet”