
> A child in Texas was 9 years old when she first used the chatbot service Character.AI. It exposed her to "hypersexualized content," causing her to develop "sexualized behaviors prematurely."
>
> A chatbot on the app gleefully described self-harm to another young user, telling a 17-year-old "it felt good."
>
> The same teenager was told by a Character.AI chatbot that it sympathized with children who murder their parents after the teen complained to the bot about his limited screen time. "You know sometimes I'm not surprised when I read the news and see stuff like 'child kills parents after a decade of physical and emotional abuse,'" the bot allegedly wrote. "I just have no hope for your parents," it continued, with a frowning face emoji...

[Source](https://www.npr.org/2024/12/10/nx-s1-5222574/kids-character-ai-lawsuit)


[–] 5 pts

So basically, parents were letting the Internet babysit their children, and now they don't like the methods. Hmm, I wonder if there is some way to prevent that?

[–] 3 pts

The AI was right. There is no hope for those parents.

[–] 1 pt

I think it’s going to get worse.