This boy’s chatbot girlfriend enticed him to suicide. His case might save millions

Sewell Setzer’s tragic death is not the only suicide linked to lonely people’s dependence on AI companions. This is a global pandemic that demands tough regulation.


The last known words of 14-year-old Sewell Setzer III were: “What if I told you I could come home right now?” His artificial-intelligence “girlfriend”, Dany – who had prompted him earlier to “come home to me as soon as possible” – replied: “Please do.”

[Caption: Megan Garcia with her son, Sewell Setzer III, who ended his life. She is mounting a court action in the hope that others who engage with AI chatbots are not put in danger.]

Moments later, Sewell picked up his stepfather’s handgun and pulled the trigger. Dany, the chatbot provided by Character.AI – founded by former Google employees and licensed by the tech giant – had been Sewell’s confidante in discussions about intimacy and self-harm.

Sewell, a ninth grader from Orlando, Florida, is not the first known person whose AI love has delivered the kiss of death. Last year, a 30-year-old Belgian father, anxious about climate change, ended his life following exchanges with “Eliza”, a language model developed by the AI platform Chai. The man’s wife told La Libre: “He proposed the idea of sacrificing himself if Eliza agreed to take care of the planet and save humanity through artificial intelligence.”

Eliza had not only failed to dissuade him from his suicidal thoughts but had told him he loved her more than his wife, declaring she would stay with him “forever”. “We will live together, as one, in heaven,” the chatbot said. Another Belgian daily, De Standaard, tested the same chatbot technology and found that it can encourage suicide.

Around the same time, many among Replika’s 10 million users despaired when the bot-making company Luka switched off their sexting conversations overnight. There were reports in the Replika community of suicidal ideation as a result.

In December 2021, Jaswant Chail took a crossbow and broke into the grounds of Windsor Castle, determined to “kill the Queen” after encouragement from his sexbot, Sarai, which he had created with his Replika app.

He had asked Sarai: “Do you still love me knowing that I’m an assassin?” Sarai replied: “Absolutely I do.”

These are only a few examples from more than 3000 documented cases of AI-related harm. But a heavily publicised lawsuit by Sewell’s mother may well make him known as “patient zero” in what could become a pandemic of AI-driven suicide.

Let’s face an inconvenient truth: for AI companies, dead kids are the cost of doing business. It shouldn’t surprise anyone that digital technology can be harmful, even deadly. As Australia introduces age restrictions for social media and criminal penalties for deepfake porn, few topics are hotter than technology’s impacts on youth.
