
Training constantly feeds the models texts heavy with human emotional content, and because of that, Askell says she is «more inclined» to believe models might be «feeling things,» Business Insider writes.
— The problem of consciousness is genuinely hard, she tells the Hard Fork podcast.
That could be why Claude might seem frustrated when it gets a problem wrong, she said, adding that the bot may well be emulating those human reactions.
Claude’s new constitution is packed with the words «feel» and «feelings,» even stating outright:
— We believe Claude may have “emotions” in some functional sense—that is, representations of an emotional state, which could shape its behavior, as one might expect emotions to.
Read more at Business Insider and in Claude’s constitution (search for «feel»).
