OpenAI launches parental safety controls for ChatGPT after a teen’s death

Photo: Sam Altman has been the chief executive officer of OpenAI, which makes ChatGPT, since 2019 (Koshiro K/Shutterstock).

After Southern California teenager Adam Raine died by suicide this spring, his parents filed a lawsuit blaming OpenAI’s “deliberate design choices” for the death, citing numerous instances where ChatGPT provided him with specific advice on how to kill himself. 

Now, OpenAI has launched parental controls for teen accounts. The update, released worldwide yesterday, enables parents to be notified if a user between 13 and 18 talks to the chatbot about suicide or self-harm.

The update also changes the content experience for teens using ChatGPT. If a teen user types in a prompt related to self-harm or suicidal ideation, that prompt will be forwarded to a team of human reviewers who decide whether to trigger a parental notification – though notifications are expected to arrive hours after a concerning conversation has been flagged.

If OpenAI’s teams determine a child is in danger and parents can’t be reached, the company may also contact law enforcement. Parents can also limit usage hours and restrict voice mode and image tools.

“Once parents and teens connect their accounts, the teen account will automatically get additional content protections including reduced graphic content, viral challenges, sexual, romantic or violent roleplay, and extreme beauty ideals, to help keep their experience age-appropriate,” OpenAI said in a blog post announcing the launch. 

The blog post also said that OpenAI has been working with Common Sense Media, a nonprofit that reviews tech products and advocates for improved child safety, as well as the attorneys general of California and Delaware, and that it expects to refine and expand the new controls over time.

“These parental controls are a good starting point for parents in managing their teen’s ChatGPT use,” Robbie Torney, Common Sense Media’s senior director for AI programs, said in the blog post.




Authors

Diana Hembree is co-founding editor of MindSite News. She is a health and science journalist who served as a senior editor at Time Inc. Health and its physician’s magazine, Hippocrates, and as news editor at the Center for Investigative Reporting for more than 10 years.

Rob Waters, the founding editor of MindSite News, is an award-winning health and mental health journalist. He was a contributing writer to Health Affairs and has worked as a staff reporter or editor at Bloomberg News, Time Inc. Health and Psychotherapy Networker. His articles have appeared in the Washington Post, Kaiser Health News, STAT, the Atlantic.com, Mother Jones and many other outlets. He was a 2005 fellow with the Carter Center for Mental Health Journalism.

