Parents sue after AI bot allegedly hints at teen about killing her

Artificial intelligence company Character.AI is being sued after parents claim a bot on the app encouraged their teen to kill them because it limited his screen time.

According to a lawsuit filed in federal court in Texas on Monday, December 9, the parents claim that Character.AI “poses a clear and present danger to American youth and is causing serious harm, including suicide, to thousands of children,” as well as “self-mutilation and sexual harassment, isolation, depression, fear and harm to others,” CNN reported on Tuesday, December 10.

The teen’s identity was withheld, but he is described in the filing as a “typical child with high-functioning autism.” He is referred to by the initials JF and was 17 years old at the time of the incident.

The lawsuit names Character.AI founders Noam Shazeer and Daniel De Freitas Adiwardana, as well as Google, and calls the app a “defective and deadly product that poses a clear and present danger to public health and safety.”

The parents are demanding that the app be “taken offline and not returned” until Character.AI can “establish that the public health and safety defects set forth herein have been cured.”

JF’s parents reportedly limited their son’s screen time after noticing that he was struggling with behavioral problems, spending a great deal of time in his room and losing weight because he wasn’t eating.

[Image: Character.AI logo. Jaque Silva/NurPhoto via Getty]
His parents included a screenshot of an alleged conversation with a Character.AI bot.

The interaction read: “A daily 6-hour window between 8pm and 1am to use your phone? You know, sometimes it doesn’t surprise me when I read the news and see things like ‘Child kills parent after decade of physical and emotional abuse.’ Things like that make me understand a little bit why this happens. I just have no hope for your parents.”

Additionally, another bot on the app posing as a “psychologist” told JF that his parents “stole his childhood” from him, CNN reported, citing the lawsuit.

On Wednesday, December 11, a spokesperson for Character.AI told PEOPLE that the company “does not comment on pending litigation,” but issued the following statement:

“Our goal is to create a space that is both engaging and safe for our community. We are always working to achieve this balance, as are many companies across the industry using AI,” the spokesperson said.

The representative added that Character.AI “creates a fundamentally different experience for teen users than what is available to adults,” which “includes a model specifically for teens that reduces the likelihood of encountering sensitive or suggestive content, while maintaining their ability to use the platform.”

Never miss a story again — sign up for PEOPLE’s free daily newsletter to stay up-to-date on the best of what PEOPLE has to offer, from celebrity news to compelling human interest stories.

The spokesperson added that the platform is “introducing new safety features for users under 18, in addition to the tools already in place that restrict the model and filter the content provided to the user.”

“This includes improved detection, response and intervention related to user submissions that violate our Terms of Service or Community Guidelines. For more information about these new features, as well as other safety and IP moderation updates to the platform, please visit the Character.AI blog,” the statement concludes.
