Google and Meta Found Negligent in Apps’ Mental Health Harms
Meta, Instagram’s owner, and Google, YouTube’s owner, neglected to protect a user from their apps’ addictiveness, a jury rules.

Greetings, MindSite News Readers.
In today’s Daily, a California jury has found Meta and Google negligent in a case about their platforms’ mental health harms. What to say to your teen who’s turning to AI for advice. College football star Dante Moore advocates for better mental health care. Plus, a new study finds a surprising dip in the mental health of new fathers.
But first, here’s an oldie (but a goodie): May we all put forth our best, like this little one, and issue generous portions of grace.
Jury Finds Meta and Google Negligent in Landmark Social Media Mental Health Trial

A California jury just delivered a landmark verdict that is sure to send shockwaves throughout the social media industry. After almost two weeks of deliberations, jurors in Los Angeles Superior Court determined that Meta, the parent company of Instagram and Facebook, and Google, which owns YouTube, were negligent in failing to protect a user from the addictive nature of their apps, CNBC reports.
The jury ruled in favor of the plaintiff, who argued that the platforms’ negligence was “a substantial factor” in the mental health harms she suffered. The now-20-year-old woman, identified as KGM or Kaley, had attributed severe body dysmorphia (an overwhelming focus on perceived problems with one’s physical appearance), depression and suicidal thoughts to near-constant use of both platforms, beginning in childhood.
The jury awarded $6 million in damages, half compensatory and half punitive, with Meta responsible for 70% and Google the remaining 30%. Both companies voiced their disagreement with the verdict. A Meta spokesperson said the company was “evaluating” its legal options, and a Google spokesperson strongly rejected YouTube’s classification as social media: “This case misunderstands YouTube, which is a responsibly built streaming platform, not a social media site.”
The court chose it as a bellwether case, so the verdict will help shape outcomes in similar lawsuits across California. A separate federal trial in the Northern District of California is set for this summer, involving a series of consolidated claims from school districts and parents across the country.
That suit involves Meta and YouTube as well as TikTok and Snap, which were originally involved in Kaley’s case, but settled out of court. Just one day before this Los Angeles verdict, a New Mexico jury separately determined that Meta had failed to keep children safe from online predators on its apps. The company was ordered to pay $375 million in that case, but said it would appeal.
Experts see a parallel in cases like these with the verdicts against tobacco companies in the 1990s, in which they were forced to pay billions for hiding what they knew about the dangers of their products. The impacts of those decisions reverberated, and our culture ultimately shifted in their wake.
Pre-empting free-speech defenses, attorneys are choosing to make app content secondary in cases like Kaley’s, focusing instead on deliberate design choices like recommendation algorithms and auto-play features. Research has found that apps with such features can erode attention, foster expectations for instant gratification, and push users away from face-to-face interaction.
“Today’s verdict is a historic moment – for Kaley and for the thousands of children and families who have been waiting for this day,” said her attorneys, in a statement. “She showed extraordinary courage bringing this case and telling her story in open court. A jury of Kaley’s peers heard the evidence, heard what Meta and YouTube knew and when they knew it, and held them accountable for their conduct.”
Your Teen Is Probably Going to Turn to AI for Advice – Check Out the APA’s Tips on Helping Them Stay Safe and Grounded

To keep it 100, I’m opting to share this blog post from the American Psychological Association (APA) because of how much its headline alarmed me. It’s a possibility I hate to even imagine.
I believe my daughter and I have such a strong relationship that she’ll come to me with nearly any issue for support, now and in the future. It’s possible that’s delusional thinking, and I know that my assumptions will be challenged when puberty arrives (parents of teens, feel free to get at me). But I feel like the APA is backing me up.
More and more people are turning to AI chatbots for help with everything from work to food, and even personal relationships – teens are by no means exempt from the draw. Besides their ubiquity, chatbots tend to be set up to refrain from judgment and to respond with warmth and affirmation.
They can offer superb support with some high school homework, and are a useful and always-available thought partner, so it’s not hard to see why adolescents may turn to them when grappling with a concern, especially one they’d find mortifying to bring before a parent.
But the ease and warmth that make AI tools easy to open up to are also what make them tricky, says clinical psychologist Joshua Goodman.
“It’s not going to punish you, ground you, or otherwise be disparaging,” Goodman said, adding that “it isn’t helping young people to grow in the ways that are going to be most beneficial for them in the long run.”
A related issue is that teens may be less likely to question what AI tells them, or recognize any biases and sycophancy built into their design. And unlike a conversation with a therapist or trusted friend, what teens share with a chatbot is often stored, analyzed, and potentially used to train AI systems.
It’s good news, then, that parents continue to have significant influence in the lives of their children. Teens can, will, and do turn to AI with questions, but experts affirm that parents are irreplaceable. Armed with knowledge and patience, parents who remain emotionally and socially engaged with their children can work confidently with them to make sure their AI use stays safe.
You did read that right – they’re already using the tech, and a ban is unlikely to last. (Have you Googled anything recently?) So our power as parents lies in showing kids how to use it properly. APA experts suggest testing AI together, running a query through a chatbot side-by-side and then discussing its response.
Use that conversation around the output to model critical thinking. And if you’re still learning about the tech, it’s okay – it’s less about giving lessons, and more about keeping the lines of communication open, Goodman said. “You don’t have to be an expert on AI. Be honest with your teen if there’s something you don’t know.”
In that vein, if you truly want to limit their use, remember that boundaries work best when teens are part of setting them, said Amber W. Childs. “They’re better able to understand the reasoning behind them and much more likely to follow them.”
Suggested strategies include tech-free mealtimes, agreed-upon topics that require human discussion, and simple check-ins about kids’ AI use.
And watch out for red flags. The APA notes that professional support is sometimes essential, especially if your child is discussing self-harm, serious depression or suicide with an AI chatbot. There’s a community around a child who can help, including at their school.
Stay alert about your teen’s AI activity: a teen who calls a chatbot their friend, becomes irritable without access to AI, or starts pulling away from real relationships might need more than a conversation.
In other news…
Swedish study suggests that new fathers need mental health checkups too: A recent study published in JAMA Network Open found that new fathers face a 30% increased risk of depression and stress disorders by the end of their baby’s first year – a finding one of the researchers called “unexpected,” since fathers in the study had actually experienced a mental health boost during pregnancy and in the months right after birth. The study looked at data from nearly 1.1 million fathers across Sweden over nearly two decades, making it one of the largest to focus on paternal mental health, says US News & World Report.
Discussing the study with HealthDay, experts said the findings expose a real gap in care: While mothers and infants see doctors regularly during pregnancy and over the postpartum year, fathers may visit their primary care physician just once – if at all – during that entire stretch. Community could be part of addressing that gap.
“There are new mom groups that [women] can join, support groups, non-clinical groups, WhatsApp groups, etc.,” said Khatiya Moon, an NYC-based medical director who wasn’t involved with the study. “I think there’s a growing ecosystem of that kind of resource for fathers, but not an entirely established one. So, to the extent that fathers can seek out that kind of support or even start them in their own communities, I think that would be beneficial.”
College football star uses platform to advocate for mental health care: Dante Moore, a Detroit native, quarterback for the University of Oregon Ducks and a projected first-round NFL draft pick, wrote a letter to Oregon Governor Tina Kotek earlier this month arguing for expanded access to mental health services across the state.
Shared with The Oregonian/OregonLive, the letter outlines Moore’s own battle with depression at the start of his college career, which coincided with his mother’s battle with breast cancer. “Watching her endure chemotherapy while I tried to stay focused on school and football challenged me mentally and emotionally,” Moore wrote. “It was heavy in ways that are difficult to put into words.” Support from family and mental health professionals made all the difference, he said.
Moore said he benefited from virtual mental health services and singled out Charlie Health, a Montana-based company. That endorsement is controversial: the company is owned by a private equity firm and has been lobbying for its telehealth services to remain covered under Oregon’s public health plan, despite some of its providers not being licensed in the state. Because of this, the Oregon Health Authority has tried to block the for-profit company from continuing to operate in Oregon.
The name “MindSite News” is used with the express permission of Mindsight Institute, an educational organization offering online learning and in-person workshops in the field of mental health and wellbeing. MindSite News and Mindsight Institute are separate, unaffiliated entities that are aligned in making science accessible and promoting mental health globally.
