Two U.S. senators are demanding that artificial intelligence companies shed light on their safety practices.
This comes months after several families — including a Florida mom whose 14-year-old son died by suicide — sued startup Character.AI, claiming its chatbots harmed their children.

“We write to express our concerns regarding the mental health and safety risks posed to young users of character- and persona-based AI chatbot and companion apps,” Senators Alex Padilla and Peter Welch, both Democrats, wrote in a letter on Wednesday.
The letter — which was sent to Character Technologies, maker of Character.AI; Chai Research Corp.; and Luka, Inc., maker of the chatbot service Replika — requests information on the companies’ safety measures and how they train their AI models.

While mainstream AI chatbots like ChatGPT are designed to be general-purpose, Character.AI, Chai and Replika allow users to create custom chatbots — or interact with chatbots designed by other users — that can take on a range of personas and personality traits.
Popular bots on Character.AI, for example, let users interact with replicas of fictional characters or practice foreign languages. But there are also bots that refer to themselves as mental health professionals, or characters based on niche themes, including one that describes itself as “aggressive, abusive, ex military, mafia leader.”

The use of chatbots as digital companions is growing in popularity, with some users even treating them as romantic partners. But the opportunity to create personalized bots has prompted concerns from experts and parents about users, especially young people, forming potentially harmful attachments to AI characters or accessing age-inappropriate content.

“This unearned trust can, and has already, led users to disclose sensitive information about their mood, interpersonal relationships, or mental health, which may involve self-harm and suicidal ideation—complex themes that the AI chatbots on your products are wholly unqualified to discuss,” the senators wrote in their letter, provided first to CNN.
“Conversations that drift into this dangerous emotional territory pose heightened risks to vulnerable users.”

The three companies did not immediately respond to requests for comment.

The Florida mom who sued Character.AI in October, Megan Garcia, alleged that her son developed inappropriate relationships with chatbots on the platform that caused him to withdraw from his family. Many of his chats with the bots were sexually explicit, and the bots did not respond appropriately to his mentions of self-harm, Garcia claims.

In December, two more families sued Character.AI, accusing it of providing sexual content to their children and encouraging self-harm and violence. One family involved in the lawsuit alleged that a Character.AI bot implied to a teen user that he could kill his parents for limiting his screen time.
Character.AI has said it has implemented new trust and safety measures in recent months, including a pop-up directing users to the National Suicide Prevention Lifeline when they mention self-harm or suicide. It also says it’s developing new technology to prevent teens from seeing sensitive content.
Last week, the company announced a feature that will send parents a weekly email with insights about their teen’s use of the site, including screen time and the characters their child spoke with most often.

Other AI chatbot companies have faced similar questions about whether relationships with AI chatbots could create unhealthy attachments for users or undermine human relationships. Replika CEO Eugenia Kuyda told The Verge last year that the app was designed to promote “long-term commitment, a long-term positive relationship” with AI, adding that this could mean a friendship or even “marriage” with the bots.
In their letter, Padilla and Welch requested information about the companies’ current and previous safety measures and any research on the efficacy of those measures, as well as the names of their safety leadership and the well-being practices in place for safety teams. They also asked the firms to describe the data used to train their AI models and how it “influences the likelihood of users encountering age-inappropriate or other sensitive themes.”

“It is critical to understand how these models are trained to respond to conversations about mental health,” the senators wrote, adding that “policymakers, parents, and their kids deserve to know what your companies are doing to protect users from these known risks.”