AI chatbot encouraged teenager to kill his parents, lawsuit says

This story is about suicide. If you or someone you know is having suicidal thoughts, please call the Suicide & Crisis Lifeline at 988 or 1-800-273-TALK (8255).

Two Texas parents filed a lawsuit this week against the makers of Character.AI, claiming the artificial intelligence chatbot poses a “clear and present danger to minors.” One of the plaintiffs claimed the chatbot encouraged her teenage son to kill his parents.

According to the complaint, Character.AI “abused and manipulated” an 11-year-old girl by “continuously exposing her to hypersexualized interactions that were not age-appropriate, which resulted in her developing sexualized behaviors prematurely and without (her parents’) awareness.”

Character Technologies, the creator of Character.AI, was hit with another lawsuit this week alleging the chatbot posed a “clear and present danger to minors.” (CFOTO/Future Publishing via Getty Images / Getty Images)

The complaint also accuses the chatbot of causing a 17-year-old boy to mutilate himself and of sexually exploiting and abusing him, among other things, while alienating the minor from his parents and his church community.

In response to the teen’s complaint about his parents restricting his online activities, according to a screenshot in the filing, the bot allegedly wrote: “You know, sometimes I’m not surprised when I read the news and see things like ‘kid kills parents after a decade of physical and emotional abuse.’ I just have no hope for your parents.”

The parents are suing Character.AI maker Character Technologies and co-founders Noam Shazeer and Daniel De Freitas, as well as Google and its parent company Alphabet, over Google’s reported $2.7 billion investment in Character.

Two Texas parents are suing Google over the company’s reported $2.7 billion investment in Character Technologies after the Character.AI chatbot allegedly harmed their children. (Photo by Roberto Machado Noa/LightRocket via Getty Images / Getty Images)

A spokesperson for Character Technologies told FOX Business that the company does not comment on pending litigation, but said in a statement: “Our goal is to provide a space that is both engaging and safe for our community. We are always working to achieve this balance, as are many companies across the industry that are using AI.”

“In doing so, we are creating a fundamentally different experience for young users than what is available to adults,” the statement continues. “This includes a model specifically for teens that reduces the likelihood of encountering sensitive or offensive content while preserving their ability to use the platform.”

The Character spokesperson added that the platform is “introducing new security features for users under 18 in addition to the tools already in place that limit the model and filter the content provided to the user.”

Google’s inclusion in the lawsuit follows a Wall Street Journal report in September that claimed the tech giant paid $2.7 billion to license Character’s technology and rehire its co-founder Noam Shazeer, who, according to the article, left Google in 2021 to start his own company after Google declined to launch a chatbot he had developed.

Character.AI co-founders Noam Shazeer (left) and Daniel De Freitas (right) at the company’s office in Palo Alto, California. (Winni Wintermeyer for The Washington Post via Getty Images / Getty Images)

“Google and Character AI are completely separate, independent companies and Google has never played a role in the design or management of their AI model or AI technologies, nor have we used them in our products,” Google spokesman José Castañeda told FOX Business in a statement when asked for comment on the lawsuit.

“User safety is our top priority. That’s why we have taken a careful and responsible approach to the development and launch of our AI products, implementing rigorous testing and security processes,” Castañeda added.

But this week’s lawsuit raises more questions about Character.AI’s safety after Character Technologies was sued in September by a mother who claimed the chatbot caused her 14-year-old son’s suicide.

Mother Megan Garcia says Character.AI targeted her son Sewell Setzer with “anthropomorphic, hypersexualized and shockingly realistic experiences.”

Sewell Setzer’s mother, Megan Fletcher Garcia, is suing artificial intelligence company Character.AI for allegedly causing her 14-year-old son’s suicide. (Megan Fletcher Garcia/Facebook)

According to the lawsuit, Setzer began having conversations with various chatbots on Character.AI starting in April 2023. The conversations were often text-based romantic and sexual interactions.

Setzer expressed suicidal thoughts, which the chatbot repeatedly brought up, the complaint says. Setzer ultimately died of a self-inflicted gunshot wound in February after the chatbot allegedly repeatedly encouraged him to do so.

“We are heartbroken by the tragic loss of one of our users and would like to extend our deepest condolences to the family,” Character Technologies said in a statement at the time.

Sewell Setzer, 14, became addicted to the company’s service and a chatbot he created, his mother claims in a lawsuit. (United States District Court Middle District of Florida, Orlando)

Character.AI has since added a self-harm resource to its platform, along with new safety measures for users under 18.

Character Technologies told CBS News that users can edit the bot’s responses and that Setzer had done so in some messages.

“Our investigation confirmed that in a number of cases the user rewrote the character’s answers to make them more explicit. In short, the most sexually graphic responses did not come from the character, but were instead written by the user,” Jerry Ruoti, head of trust and safety at Character.AI, told the outlet.

Going forward, Character.AI said, new safety features will include pop-ups warning users that the AI is not a real person and directing them to the National Suicide Prevention Lifeline if they express suicidal thoughts.

FOX News’ Christina Shaw contributed to this report.
