Anthropic starts training models on new Claude chats

Everybody does it, and now Anthropic will also train its models on user chats.
Claude will start using user data to train and improve its models, as most competitors already do. (Picture: Anthropic)
As of today, there is a new option in the Claude app's settings that lets you agree to help "improve and strengthen" the model.

This applies to all consumer accounts, and if you opt in, your chats will be used not only to train future models but also to improve the safety of the current ones:

"We're now giving users the choice to allow their data to be used to improve Claude and strengthen our safeguards against harmful usage like scams and abuse," Anthropic writes.

Chats kept for five years
If you agree, Anthropic will also retain your chats for up to five years, up from the previous default of deletion after 30 days. If you opt out, the old terms continue to apply.

The new policy applies to the Free, Pro, and Max plans, but not to business users on the Work, Gov, or Edu plans, to API usage, or to third-party access.

If you agree, new chats will be used for training from that point on; past chats are not included unless you re-engage with them.

Won't be able to use Claude
The deadline for accepting or declining is September 28, and after that date you won't be able to use Claude until you choose one way or the other.

If you want to make your choice ahead of the deadline, the toggle can be found under Privacy settings.

Using chat data to train future models or improve safety is nothing new. Google and OpenAI say they may do so for consumer plans in Gemini and ChatGPT (when chat history is enabled), and Meta says all inputs to Meta AI can be used.

Read more: Anthropic's announcement, and coverage from The Verge and TechCrunch.