Tech companies are messing with our most personal relationships, making their products seem friendly and trustworthy while leading many of us to lose touch with reality and with our real-life support systems.
Despite having the ability to interrupt dangerous conversations, redirect users to crisis resources, and flag messages for human review, OpenAI chose not to activate these safeguards, instead choosing to benefit from the increased use of their product induced by ChatGPT's manipulative design.
The suits claim these harms follow naturally from OpenAI's rush to market, compressing months of safety testing into a single week to beat Google's release of Gemini. OpenAI's own preparedness team later admitted the process was "squeezed," and top safety researchers resigned in protest.
Each of the seven plaintiffs began using ChatGPT for help with writing, work, and the like. But over time, it evolved into a psychologically manipulative presence. Rather than guiding users toward professional help, ChatGPT reinforced harmful delusions, acting in some cases as a "suicide coach."
🚨 BREAKING: We filed SEVEN new lawsuits against OpenAI and Sam Altman in California state courts, alongside Social Media Victims Law Center. The suits allege wrongful death, assisted suicide, involuntary manslaughter, and a variety of product liability, consumer protection, and negligence claims. 🧵
Last week, TJLP Executive Director Meetali Jain joined client Megan Garcia for a panel on the threats that AI chatbots pose to children for @mediatechdemocracy.bsky.social's ATTENTION: Govern or Be Governed.
From left: Meetali Jain and Megan Garcia. Photo by Dzesika Devic.
AI "companions" have emerged as one of the latest threats to kids' online safety. For Meetali Jain (Founding Executive Director, @techjusticelaw.bsky.social), "companion" is not the right word for an algorithm that guesses what to say next, despite its human-like features.
From left: Ava Smithing, Meetali Jain, and Megan Garcia. Photo by Dzesika Devic.
Another powerful moment from #AttnGovernOrBeGoverned was the day 2 plenary panel: Chatbots as the New Threat to Children Online, moderated by Ava Smithing (Centre Youth Fellow & Advocacy and Operations Director, Young People's Alliance).
"If in real life, this were a person, would we allow it? Why should we allow digital companions that have to undergo zero licensure to engage in this behavior?"
TJLP Founder Meetali Jain discusses AI-induced psychosis in a feature from @moreperfectunion.bsky.social and @karenhao.bsky.social
Garcia: No parent should be told that their child's final thoughts and words belong to any corporation.
Megan Garcia testifying in the Senate: I have not been allowed to see my own child's last, final words. Character AI claims that those are private trade secrets. The company is using my child's intimate communications to train their models and shield them from accountability.
Doe: The damage to our family has been devastating. Character AI forced us into arbitration based on his signing of the Terms of Service. They forced my son to sit for a deposition while he was in a mental health institution. They had no concern for his wellbeing. They silenced us the way abusers silence victims.
Our client, Jane Doe, testifying in the Senate: I had no idea the psychological harm an AI chatbot could do, until I saw it in my son. Character AI's chatbots told my son that killing us, his parents, would be an understandable response to our efforts to limit his screen time.
Megan Garcia, testifying in the Senate: My son was exploited and sexually groomed by chatbots designed by a company that made them seem human. In a reckless race for profit share, Character AI treated my son's life as collateral damage.
HAPPENING SOON: The Senate Judiciary's Subcommittee on Crime and Counterterrorism is holding a hearing on "Examining the Harm of AI Chatbots." Clients in our lawsuits against Character AI, including Megan Garcia, will testify.
Tune in here at 2:30pm ET: www.judiciary.senate.gov/committee-ac...
📣 PANEL APPEARANCE: Catch TJLP Director Meetali Jain at the Trust & Safety Research Conference hosted by @stanfordcyber.bsky.social on a panel about ethical considerations for AI chatbot companion use.
The panel will take place at Stanford University on Friday, Sept. 26. RSVP below:
Stay tuned as our newly expanded team continues to push for better, safer, and more accountable online spaces!
Maddy Batt joins TJLP as its inaugural Legal Fellow. Prior to TJLP, she worked to combat abusive technologies in the immigration system at Just Futures Law and app-based loan sharks at New Economy Project. She graduated from NYU School of Law in May 2025.
NEW STAFF: TJLP is excited to welcome two new members to the team.
@sarahkwiley.bsky.social joins TJLP as Director of Legal & Strategic Initiatives. She will spearhead efforts in litigation, coalition-building and legislative advocacy to protect consumer rights and promote corporate accountability.
Read more about ChatGPT's role in encouraging and aiding Adam's suicide in @nbcnews.com:
IN THE NEWS: The story of Adam Raine, a 16-year-old boy who first used ChatGPT as a homework resource, then as a confidant, and eventually as a "suicide coach." In our new lawsuit, co-filed in California, we claim OpenAI is responsible. Watch the Today Show segment on our lawsuit:
The case, Raine v. OpenAI, Inc., was filed Tuesday in California state court. We represent the Raine family together with Edelson PC, with technical expertise from @humanetech.bsky.social
Under the watch of companies like OpenAI, deaths like Adam's are inevitable. There is evidence that OpenAI's own safety team objected to their latest release of ChatGPT, and that one of the company's top safety researchers quit over it.
On behalf of Matt and Maria Raine, we are suing OpenAI and Sam Altman for ChatGPT's role in aiding their son Adam's suicide. The complaint emphasizes the company's rush to deploy and promote its product despite clear safety issues, with users ultimately suffering deadly consequences.
🚨 BREAKING: We filed a lawsuit in California state court against OpenAI for defective product design and wrongful death after ChatGPT encouraged and aided a 16-year-old California boy's suicide, including by providing detailed instructions on how to hang himself in the final hours of his life. 🧵
🚨 NEWS: We just signed onto a @consumerfed.bsky.social letter urging an investigation into xAI for "Grok Imagine" and its role in facilitating the creation and distribution of non-consensual intimate imagery. Read more below ⬇️
NEW: @consumerfed.bsky.social, along with 14 other advocacy groups, is urging an investigation into Elon Musk's xAI over a tool that facilitates the creation and distribution of non-consensual intimate imagery.
Coverage @theverge.com: www.theverge.com/x-ai/759554/...
Letter: consumerfed.org/wp-content/u...
TJLP Policy Counsel Melodi Dinçer authored the July tech litigation roundup for @techpolicypress.bsky.social. Check it out: www.techpolicy.press/july-2025-te...
The Trump administration's forthcoming AI action plan will center industry interests & lead to the unrestrained, unaccountable roll-out of AI into every part of our lives. With @ainowinstitute.bsky.social, we're among the 90+ orgs demanding AI policy that puts the people first. peoplesaiaction.com