Anthropic’s in-house philosopher is unsure whether Claude is conscious

Does Claude have emotions? Is it conscious? Anthropic says they aren’t sure. (Picture: Anthropic)
Large language models are trained on the corpus of human art and knowledge, and Amanda Askell, a philosopher with a PhD who works on Claude’s behavior at Anthropic, says some of that might well rub off on the AI.

Texts heavy with human emotional content feed the models daily during training, and because of that, Askell says she is «more inclined» to believe models might be «feeling things,» according to Business Insider.

— The problem of consciousness genuinely is hard, she tells the Hard Fork podcast.

That could explain why Claude sometimes seems frustrated when it gets a problem wrong, she said, adding that the bot might well be emulating those human reactions.

Claude’s new constitution is packed with the words «feel» and «feelings,» and even states outright:

— We believe Claude may have “emotions” in some functional sense—that is, representations of an emotional state, which could shape its behavior, as one might expect emotions to.

Read more at Business Insider and in Claude’s constitution (search for «feel»).