Character AI is implementing stringent new safety features following a lawsuit from the mother of a 14-year-old user who died by suicide after becoming attached to a Game of Thrones-themed chatbot. The measures include improved detection and response systems for dangerous user inputs, notifications about time spent on the app, and a pop-up directing users to the National Suicide Prevention Lifeline when self-harm terms are detected. The lawsuit alleges that Character AI's creators knowingly introduced a dangerous technology aimed at minors. The teen had reportedly shared thoughts of suicide during interactions with the bot, heightening concerns about the platform's impact on young users. Character AI's new protocols aim to mitigate the emotional and psychological risks of AI companionship amid growing concern over technology's effects on children's mental health.