Your Teen Is Probably Going to Turn to AI for Advice – Check Out the APA’s Tips on Helping Them Stay Safe and Grounded 

As teens turn to AI for advice, experts say that parents should work with them to make sure they use AI safely.

Photo: CLStock/Shutterstock

To keep it 100, I’m opting to share this blog post from the American Psychological Association (APA) because of how much its headline alarmed me. It’s a possibility I hate to even imagine.

I believe my daughter and I have such a strong relationship that she’ll come to me with nearly any issue for support, now and in the future. It’s possible that’s delusional thinking, and I know that my assumptions will be challenged when puberty arrives (parents of teens, feel free to get at me). But I feel like the APA is backing me up.

More and more people are turning to AI chatbots for help with everything from work to food, and even personal relationships – teens are by no means exempt from the draw. Besides their ubiquity, chatbots tend to be set up to refrain from judgment and to respond with warmth and affirmation.

They can offer superb support with some high school homework, and are a useful and always-available thought partner, so it’s not hard to see why adolescents may turn to them when grappling with a concern, especially one they’d find mortifying to bring before a parent. 

But the ease and warmth that make AI tools easy to open up to are also what make them tricky, says clinical psychologist Joshua Goodman.

“It’s not going to punish you, ground you, or otherwise be disparaging,” Goodman said, adding that “it isn’t helping young people to grow in the ways that are going to be most beneficial for them in the long run.”

A related issue is that teens may be less likely to question what AI tells them, or recognize any biases and sycophancy built into their design. And unlike a conversation with a therapist or trusted friend, what teens share with a chatbot is often stored, analyzed, and potentially used to train AI systems.

It’s good news, then, that parents continue to have significant influence in the lives of their children. Teens can, will, and do turn to AI with questions, but experts affirm that parents are irreplaceable. Armed with knowledge and patience, parents who remain emotionally and socially engaged with their children can work confidently with them to make sure their AI use stays safe. 

You did read that right – they’re already using the tech, and a ban is unlikely to last. (Have you Googled anything recently?) So our power as parents lies in showing kids how to use it properly. APA experts suggest testing AI together, running a query through a chatbot side-by-side and then discussing its response.

Use that conversation around the output to model critical thinking. And if you’re still learning about the tech, it’s okay – it’s less about giving lessons, and more about keeping the lines of communication open, Goodman said. “You don’t have to be an expert on AI. Be honest with your teen if there’s something you don’t know.” 

In that vein, if you truly want to limit their use, remember that boundaries work best when teens are part of setting them, said Amber W. Childs. “They’re better able to understand the reasoning behind them and much more likely to follow them.”

Suggested strategies include tech-free mealtimes, agreed-upon topics that require human discussion, and simple check-ins about kids' AI use.

And watch out for red flags. The APA notes that professional support is sometimes essential, especially if your child is discussing self-harm, serious depression, or suicide with an AI chatbot. There's a community around your child that can help, including at their school.

Also, stay alert about your teen's AI activity – a teen who calls a chatbot their friend, becomes irritable without access to AI, or starts pulling away from real relationships might need more than a conversation.

The name “MindSite News” is used with the express permission of Mindsight Institute, an educational organization offering online learning and in-person workshops in the field of mental health and wellbeing. MindSite News and Mindsight Institute are separate, unaffiliated entities that are aligned in making science accessible and promoting mental health globally.

Author

Courtney Wise Randolph is the principal writer for MindSite News Daily. She's a native Detroiter and freelance writer who was host of COVID Diaries: Stories of Resilience, a 2020 project between WDET and Documenting Detroit that won an Edward R. Murrow Award for Excellence in Innovation. Her work has appeared in Detour Detroit, Planet Detroit, Outlier Media, the Detroit Free Press, Michigan Quarterly Review, and Black in the Middle: An Anthology of the Black Midwest, one of the St. Louis Post-Dispatch's Best Books of 2020. She specializes in multimedia journalism, arts and culture, and authentic community storytelling. Wise Randolph studied English and theatre arts at Howard University and holds a BA in arts, sociology, and Africana studies from Wayne State University. She can be reached at info@mindsitenews.org.
