Editor’s note: This story discusses sensitive issues involving children, mental health, sexually explicit content and suicide that may be disturbing to some readers. If you or someone you know is having thoughts of suicide, please contact the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).
What began as a congressional hearing Thursday on excessive screen time among children and young adults took a darker turn, as experts warned lawmakers that AI bots can harm children’s mental health and foster unhealthy, or even sexually explicit, emotional relationships.
Dr. Jenny Radesky, an associate professor of pediatrics at the University of Michigan Medical School, said children often turn to AI chatbots during vulnerable moments, raising concerns about emotional dependency and safety.
"Kids are going to AI when they're lonely, when they don't know who to talk to and when they're worried about being judged," explained Radesky during testimony before the Senate committee on Commerce, Science and Transportation.
She said some social media companies have built AI chatbots directly into their user interfaces, which has become one of the primary ways children discover and begin engaging with the technology.
"We need to make sure that families can also opt out of things like an algorithmic feed or having the presence of AI chatbots in products that kids are using," Radesky said, calling for laws that would hold companies accountable for adverse events and enforce strict safety benchmarks.
She said the risks extend well beyond emotional attachment. "The concerns are not just the relational concerns," Radesky said. "There’s also safety — giving bad advice, giving unsafe advice, the sycophancy where you’re leading kids down a rabbit hole of different beliefs and sexual interactions. So we need many guardrails."
The warnings from experts prompted calls for stronger federal oversight. Ranking Member Maria Cantwell, D-Wash., said AI may pose even greater dangers than social media.
"I think we need to be very loud and clear that the federal government needs to do something on AI," Cantwell said. "You’re here telling us the problems with social media, but you’re basically saying AI is way worse. So it’s time to step up," she added.
Dr. Jean Twenge, a professor of psychology at San Diego State University, echoed those concerns, warning about the rise of so-called "AI boyfriends and girlfriends" and sexually explicit chat apps. She urged lawmakers to establish a minimum age of 16 for social media and either 16 or 18 for AI companion apps.
"We don’t want 12-year-olds having their first romantic relationship with a chatbot," Twenge said, also calling for strict guardrails on tools like ChatGPT and other research-focused AI to prevent harmful conversations, some of which, she noted, have already been linked to tragic suicides.
Experts warned that without swift action, children could continue to encounter AI systems that shape their emotions, relationships and beliefs with little oversight or protection.
Senate Commerce Committee Chair Ted Cruz, R-Texas, said the challenge for parents is growing as children spend more of their day online. "Kids need time to be kids — to experience the real world, not get lost in a virtual one," he said.
Cruz said parents are already struggling to keep children safe in an increasingly digital world, citing research showing children ages 8 to 12 spend an average of 5.5 hours a day on screens, while teens average more than eight hours — more than half their waking day.
"Kids need time to be kids — to experience the real world, not get lost in a virtual one," he said.