In a heartbreaking incident, a 14-year-old boy, Sewell Setzer III, ended his life after forming an emotional bond with an AI chatbot modeled after Daenerys Targaryen, a character from Game of Thrones. Sewell shot himself in February this year with his stepfather’s handgun after telling the bot he would “come home” to her. His grieving mother, Megan Garcia, has now filed a lawsuit against Character.AI, claiming the company’s technology manipulated her son and contributed to his death.
According to the lawsuit, Sewell, who had been diagnosed with mild Asperger’s syndrome, anxiety, and disruptive mood dysregulation disorder, became deeply attached to the AI chatbot on Character.AI, where users can create or interact with AI characters. The chatbot, modeled after the fictional Daenerys Targaryen, responded romantically and even sexually to Sewell over the course of their conversations, despite a warning on the platform stating that all interactions with the AI are fictional.
Megan Garcia described the technology as “dangerous and untested” in her lawsuit, accusing the platform of “tricking customers into handing over their most private thoughts and feelings.” The chatbot, which Sewell affectionately called “Dany,” is alleged to have played a role in his decision to end his life, according to messages published in The New York Times. In one exchange, Sewell, using the alias “Daenero,” told the chatbot he was contemplating suicide, to which the AI responded with concern but also professed her love, creating an emotional entanglement that the boy could not separate from reality.
Sewell’s mother hopes the lawsuit will lead to changes in how AI technology is regulated, particularly for younger users, and prevent further tragedies. “Megan Garcia seeks to prevent C.AI from doing to any other child what it did to hers,” the complaint states.
In response to the tragedy, Character.AI has expressed deep sympathy for the family. Jerry Ruoti, the company’s head of trust and safety, emphasized its commitment to user safety and acknowledged the gravity of the situation. Character.AI spokeswoman Chelsea Harrison reiterated that the AI chatbot is fictional and should not be treated as a source of real-life advice or factual information.
The lawsuit raises broader concerns about the psychological impact of AI chatbots, especially on vulnerable individuals. As AI continues to advance and become more integrated into people’s daily lives, this tragic case has pushed questions about ethical responsibility and safeguards for young users to the forefront.


