
— While writing the constitution, we sought feedback from various external experts (and asked prior iterations of Claude for input), Anthropic says.
The new constitution describes how Claude should behave in broader, more ethical terms, they write.
This is a departure from previous constitutions, which were long lists of specific principles and interactions detailing how Claude would act.
The bot needs to generalize more in order to handle situations not anticipated by the written guide, Anthropic says.
The constitution for Claude is the "foundational document" for how the bot should act, and it is used both in training and at inference (that is, in day-to-day use). It is meant to be a living document, updated continuously as Anthropic observes how the bot behaves.
Read more: Anthropic’s announcement, the actual Constitution. Writeups on TechCrunch, Time.com, Axios.