Sam Altman on teen use: «Some of our principles are in conflict»

OpenAI will start automatic age checks on its users and direct teens to a clean, «age-appropriate» version.
Happy and clean ChatGPT is coming for teens, and it will call the cops if you cross the line. (Picture: generated).
Trying to balance freedom with safety, OpenAI is going all in on an age-appropriate version of ChatGPT.

Teen use of chatbots and their potential harm is rapidly becoming a hot-button political issue, complete with a Congressional hearing and an FTC probe.

OpenAI is therefore reiterating its new policies on teen use and parental controls, and says it will roll out automatic age verification that defaults users to the teen version when their age is in doubt.

Teen-GPT should be pure vanilla
The coming teen edition will block graphic sexual content, refuse flirtatious exchanges, and may involve law enforcement if it fears imminent harm and can’t reach the parents.

— When some of our principles are in conflict, we prioritize teen safety ahead of privacy and freedom, OpenAI CEO Sam Altman explains in a blog post. — These are difficult decisions, but after talking with experts, this is what we think is best, and we want to be transparent about our intentions.

Adults improperly tagged as under-age can get around the block by supplying proof of age to unlock the full version, and in some countries age verification may be mandatory.

More personal conversations
Following several recent wrongful-death accusations, OpenAI has to balance «tensions between teen safety, freedom, and privacy,» Altman writes, adding that people are increasingly turning to ChatGPT for sensitive conversations — and that this information should be privileged, as it is with doctors.

This privilege only applies to adults, though: for teens, signs of distress or possible harm will prompt OpenAI to involve their parents.

It also comes with a couple of carve-outs for «serious misuse,» according to Altman, who says that «threats to someone’s life, plans to harm others, or societal-scale harm like a potential massive cybersecurity incident» could be escalated to human review — and potentially to law enforcement.

Will call the cops
«Treating adults like adults,» Altman says, means that adult users should have the freedom to explore some of these themes, even suicide, in a fictional setting — for instance, when writing a story.

Teen-GPT, however, should not engage with topics like suicidal ideation in any setting, and in such situations it will contact the parents if possible, or the authorities if necessary.

— We realize that these principles are in conflict and not everyone will agree with how we are resolving that conflict, Altman says.

Teknotum covered the other teen features when they were announced.

Read more: OpenAI on age-sensitive content and Sam Altman’s statement; policy implications from Axios; writeup on The Verge.