
The rules also require that any AI engaging with people announce at the start that the user is talking to an AI, with repeated warnings every two hours, writes Gizmodo.
Also forbidden are "illegal religious activities," obscenity, violence, and crime; the list goes on to cover libel and insults, material that damages relationships, and content encouraging self-harm or suicide.
Beyond warnings for using AI chatbots too long, providers must also assess the user's emotional state and take "necessary measures to intervene," writes Reuters.