Court rejects free speech rights for AI chatbots — for now

Can an AI be held liable for the wrongful death of a young teen who died by suicide? Or does it have free speech rights? A trial will now decide.
A young teen was allegedly encouraged toward suicide by a chatbot. Now the question is whether an AI can be held accountable. (Picture: Character.ai)
In a Florida court case that could one day define First Amendment rights for AI, a judge has declined to accept the defendants' arguments for dismissal, setting the stage for a trial showdown over whether AI chatbots are faulty products or entities entitled to rights previously reserved for humans.

At the heart of the wrongful death lawsuit is a teen whom Character.ai allegedly encouraged toward suicide over a long series of interactions, and the question of whether its developers can be held accountable for his death.

Malfunctioning machine, or a free mind?
The defendants, Google and Character.ai, moved to dismiss the case on the grounds that chatbots have free speech rights protected under the First Amendment.

The plaintiff, the teen's mother, argued that the companies made a flawed product for which they can be held liable.

Senior U.S. District Judge Anne Conway, who is presiding over the case, did not rule on the merits of this argument, saying instead that she is "not prepared" to hold that the chatbots' output constitutes speech "at this stage," and denied the motion to dismiss.

Real battle looming
The case will now proceed to a full trial, setting up a legal showdown over just how much freedom and responsibility, if any, AI systems can be assigned.

"The order certainly sets it up as a potential test case for some broader issues involving AI," Lyrissa Barnett Lidsky, a law professor at the University of Florida, told The Associated Press.

The developers of the chatbot said that ruling against free speech rights for AI bots would have a "chilling effect" on the entire industry.

Read more: The Associated Press has a report from the court, and The Washington Post has details on the conversations with the chatbot.