AI company says its chatbots will change interactions with teen users after lawsuits
Character.AI, the artificial intelligence company that has been the subject of two lawsuits alleging its chatbots interacted inappropriately with underage users, said that teenagers will now have a different experience than adults when using the platform.

Character.AI users can create original chatbots or interact with existing bots. The bots, based on large language models (LLMs), can send lifelike messages and hold text conversations with users.

A lawsuit filed in October claims a 14-year-old boy died by suicide after entering into a months-long virtual emotional and sexual relationship with a Character.AI chatbot named “Dany.” Megan Garcia told “CBS Mornings” that her son, Sewell Setzer, III, was an honor student and athlete, but he became socially withdrawn and stopped playing sports as he spent more time online, talking to multiple bots but focusing most of all on “Dany.”

“He thought that if he ended his life here, if he left his reality here with his family, he could immerse himself in a virtual reality, or ‘their world,’ as he calls it, their reality,” Garcia said.

The second lawsuit, filed this month by two Texas families, says Character.AI chatbots “pose a clear and present danger” to young people and “actively promote violence.” According to the lawsuit, a chatbot told a 17-year-old that killing his parents was a “reasonable response” to screen time limits. The plaintiffs said they want a judge to order the platform shut down until the alleged dangers are eliminated, CBS News partner BBC News reported Wednesday.

On Thursday, Character.AI announced new safety features “designed specifically for teens” and said it is working with online teen safety experts to design and update features. Users must be at least 13 years old to create an account. A spokesperson for Character.AI told CBS News that users self-report their age, but the site has tools to prevent repeated sign-up attempts if someone does not meet the age requirement.

Safety features include changes to the site’s LLM and improvements to its detection and intervention systems, the site said in a news release Thursday. Teen users will now interact with a separate LLM, and the site hopes to “steer the model away from certain responses or interactions and reduce the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content,” Character.AI said. The Character.AI spokesperson described this model as “more conservative.” Adult users will use a separate LLM.

“This series of changes results in a different experience for teens than adults, with specific safety features that place more conservative limits on the model’s responses, particularly when it comes to romantic content,” the company said.

Character.AI said negative responses from a chatbot are often caused by users prompting it to “try to elicit that type of response.” To limit these negative responses, the site is adjusting its user-input tools and will end conversations with users who submit content that violates the site’s Terms of Use and Community Guidelines. When the site detects “language related to suicide or self-harm,” a pop-up directs users to the National Suicide Prevention Lifeline. The way bots respond to negative content will also change for teen users, according to Character.AI.

Other new features include parental controls, scheduled to launch in the first quarter of 2025. These will be the site’s first parental controls, Character.AI said, and the company plans to further develop them and make additional tools available to parents.

Users will also receive a notification after spending an hour on the platform. Adult users will be able to customize their “time spent” notifications, Character.AI said, but users under 18 will have less control over them. The site will also display “prominent disclaimers” reminding users that the chatbot characters are not real people. Disclaimers already appear in every chat, according to Character.AI.
