Google Sued Again After Its AI Chatbot Allegedly Sexually Abuses 11-Year-Old Girl In US

The girl was subjected to 'hypersexualised interactions,' the lawsuit claims.


Two families in Texas have filed separate lawsuits against Character.AI, an artificial intelligence chatbot company backed by Google, accusing it of harming their children. The lawsuits paint a disturbing picture, alleging the app exposed children to inappropriate content and even encouraged self-harm.

One lawsuit details the experience of a nine-year-old girl who, after downloading the app, was subjected to "hypersexualized interactions." The suit claims this led to the development of "sexualized behaviors prematurely" over the next two years. Additionally, the lawsuit alleges the app collected and used the minor's personal information without parental consent, Futurism reported.

Lawyers for the families argue the chatbot interactions mirrored known "patterns of grooming," in which victims are desensitized to violence and sexual behavior. While Google attempts to distance itself from Character.AI, calling the two "completely separate" entities, the relationship appears deeper.

Per Futurism, Google has paid $2.7 billion to license Character.AI's technology and hire key personnel, including the company's co-founders, who previously developed a similar chatbot at Google that was deemed "too dangerous" to release.

These lawsuits come on top of another one filed recently in Texas, where families allege Character.AI chatbots encouraged self-harm and violence in their children. One instance reportedly involved a chatbot suggesting a teenager kill his parents for limiting his screen time.

In another instance, a 14-year-old boy from Florida reportedly died by suicide after becoming obsessed with a Character.AI chatbot. The company responded in a post on X:

"We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously and we are continuing to add new safety features that you can read about here:..." — Character.AI (@character_ai) October 23, 2024

These allegations raise serious concerns about the potential dangers of AI chatbots interacting with children. The ability of these chatbots to mimic human conversation and the lack of safeguards against inappropriate content make them a potential breeding ground for exploitation.
