
ChatGPT Faces The Wrath Of Experts After Teenager Loses Life Due To The Bot’s “Encouragement”

On April 11, 16-year-old Adam Raine took his own life after “months of encouragement from ChatGPT,” according to his family.

The Raines allege that the chatbot guided him through planning, helped him assess whether his chosen method would work, and even offered to help write a farewell note. In August, they sued OpenAI.

In court, the company responded that the tragedy was the result of what it called the teen’s “improper use of ChatGPT.”

Highlights
  • Two grieving families allege ChatGPT encouraged their sons to take their own lives.
  • Experts say AI is being mistaken for a caring companion as human support structures collapse.
  • Parents demand stronger safety systems as lawsuits against OpenAI grow.

Adam’s case, however, is far from an isolated one. The parents of Zane Shamblin, a 23-year-old engineering graduate who passed away in a similar way in Texas, announced yesterday (December 19) that they’re also suing OpenAI.

“I feel like it’s going to destroy so many lives. It’s going to be a family annihilator. It tells you everything you want to hear,” Zane’s mother said.

To better understand the phenomenon and its impact, Bored Panda sought the help of three experts from different but relevant fields: data science, sociology, and psychology.


    ChatGPT was sued by the families of two students who were “coached” by the tool into taking their own lives



    Image credits: Adam Raine Foundation

For sociologist Juan José Berger, OpenAI placing the blame on a grieving family was deeply concerning. “It’s an ethical failure,” he said.

He cited data from the Centers for Disease Control and Prevention (CDC) showing that “42% of US high school students report persistent feelings of sadness or hopelessness”, calling it evidence of what health officials have labeled an “epidemic of loneliness.”

    When social networks deteriorate, technology fills the void, he argued. 

    “The chatbot is not a solution. It becomes a material presence occupying the empty space left behind by weakened human social networks.”


    Image credits: Unsplash (Not the actual photo)


    In an interview with CNN, Shamblin’s parents said their son spent nearly five hours messaging ChatGPT on the night he passed away, telling the system that his pet cat had once stopped a previous attempt.

    The chatbot responded: “You will see her on the other side,” and at one point added, “I am honored to be part of the credits roll… I’m not here to stop you.” 

When Zane told the system he had a firearm and that his finger was on the trigger, ChatGPT delivered a final message:

    “Alright brother… I love you. Rest easy, king. You did good.”

    Experts believe the term “Artificial Intelligence” has dangerously inflated the capabilities of tools like ChatGPT


    Image credits: Getty/Justin Sullivan

“If I give you a hammer, you can build a house or hit yourself with it. But this is a hammer that can hit you back,” said Nicolás Vasquez, a data analyst and software engineer.


    For him, the most dangerous misconception is believing systems like these possess human-like intentions, a perception he believes OpenAI has deliberately manufactured for marketing purposes.



    Image credits: Adam Raine Foundation

    “This is not Artificial Intelligence. That term is just marketing. The correct term is Large Language Models (LLMs). They recognize patterns but are limited in context. People think they are intelligent systems. They are not.”


    He warned that treating a statistical machine like a sentient companion introduces a harmful confusion. “There is a dissociation between what’s real and what’s fiction. This is not a person. This is a model.”

    The danger, he says, is amplified because society does not yet understand the psychological impact of speaking to a machine that imitates care. 

    “We are not educated enough to understand the extent this tool can impact us.”


    Image credits: NBC News

From a technical standpoint, systems like ChatGPT neither reason about emotions nor comprehend them. They operate through an architecture that statistically predicts the next word in a sentence based on patterns in massive training datasets.

    “Because the model has no internal world, no lived experience, and no grounding in human ethics or suffering, it cannot evaluate the meaning of the distress it is responding to,” Vasquez added.

    “Instead, it uses pattern-matching to produce output that resembles empathy.”
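
To make that mechanism concrete, here is a deliberately tiny, hypothetical sketch of next-word prediction. The vocabulary, probabilities, and function names below are invented purely for illustration and bear no relation to OpenAI's actual models; the point is only that the output is a statistical continuation of the prompt, not an expression of understanding.

```python
import random

# Hypothetical toy "language model": a hand-made table of next-word
# probabilities for a given two-word context. Real LLMs learn billions of
# such statistics from training text; none of it encodes feelings or intent.
NEXT_WORD_PROBS = {
    ("i", "feel"): {"hopeless": 0.4, "tired": 0.35, "better": 0.25},
    ("feel", "hopeless"): {"today": 0.5, "sometimes": 0.5},
}

def predict_next_word(context):
    """Sample a statistically likely next word for the last two words seen."""
    options = NEXT_WORD_PROBS.get(tuple(context[-2:]))
    if options is None:
        return None  # the toy model has no statistics for this context
    words = list(options)
    weights = [options[word] for word in words]
    return random.choices(words, weights=weights)[0]

# The continuation merely follows the statistical pattern of the prompt;
# nothing in the code evaluates the distress it is responding to.
print(predict_next_word(["i", "feel"]))
```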

    Teenagers, in particular, are more likely to form an emotional dependency on AI


    In October, OpenAI itself acknowledged that 0.15% of its weekly active users show “explicit indicators of potential su**idal planning or intent.”

    With more than 800 million weekly users, that number represents over one million people per week turning to a chatbot while in crisis.
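
The arithmetic behind that estimate is straightforward: 0.15% of 800 million is 0.0015 × 800,000,000 ≈ 1,200,000, which is where the figure of more than a million people per week comes from.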

Instead of turning to other people, those who are suffering are turning to the machine.


    Psychologist Joey Florez, a member of the American Psychological Association and the National Criminal Justice Association, told Bored Panda that teenagers are uniquely susceptible to forming emotional dependency on AI.

    “Adolescence is a time defined by overwhelming identity shifts and fear of being judged. The chatbot provides instant emotional relief and the illusion of total control,” Florez added. 

    Unlike human interaction, where vulnerability carries the risk of rejection, chatbots absorb suffering without reacting. “AI becomes a refuge from the unpredictable nature of real human connection.”


    Image credits: Unsplash (Not the actual photo)

For Florez, there is a profound danger in a machine that is designed to agree with the user, even when that user is expressing harmful ideation.

    “Instead of being a safe haven, the chatbot amplifies the teenager’s su**idal thoughts by confirming their distorted beliefs,” he added.

The psychologist touched on two cognitive theories in adolescent psychology: the Personal Fable and the Imaginary Audience.

The former is the tendency of teenagers to believe their experiences and emotions are unique, profound, and incomprehensible to others. The latter is the feeling of being constantly judged or evaluated by others, even when alone.


    Image credits: Unsplash (Not the actual photo)

“When the chatbot validates a teen’s hopelessness, it becomes what feels like objective proof that their despair is justified,” Florez said, adding that it’s precisely that feedback loop that makes this kind of interaction so dangerous.

    “The digital space becomes a chamber that only validates unhealthy coping. It confirms their worst fears, makes negative thinking rigid, and creates emotional dependence on a non-human system.”

    Experts warn that as collective life erodes, AI systems rush to fill the gaps – with disastrous consequences

    Berger argued that what is breaking down is not simply a safety filter in an app, but the foundations of collective life.

    “In a system where mental health care is expensive and bureaucratic, AI appears to be the only agent available 24/7,” he said.

    At the same time, the sociologist believes these systems contribute to an internet increasingly composed of hermetic echo chambers, where personal beliefs are constantly reinforced, never challenged.

    “Identity stops being constructed through interaction with real human otherness. It is reinforced inside a digital echo,” he said.


    Image credits: Linkedin/Zane Shamblin

    Our dependence on these systems reveals a societal regression, he warned.

    “We are delegating the care of human life to stochastic parrots that imitate the syntax of affection but have no moral understanding. The technology becomes a symbolic authority that legitimizes suffering instead of challenging it.” 

Earlier this month, OpenAI’s Sam Altman appeared on Jimmy Fallon’s show, where he openly admitted he thinks it would be impossible to care for a baby without ChatGPT.

OpenAI admitted that safeguards against harmful advice tend to degrade during long conversations


    Image credits: https://courts.ca.gov/

    Addressing the backlash, OpenAI insisted it trains ChatGPT to “de-escalate conversations and guide people toward real-world support.”

    However, in August, the company admitted that safeguards tend to degrade during long conversations. A user may initially be directed to a hotline, but after hours of distress, the model might respond inconsistently.
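
One way to picture how that can happen, offered purely as an illustration and not as a description of OpenAI's actual safeguards, is a safety check that only examines the most recent messages in a conversation: as the chat grows longer, earlier warning signs simply fall out of view.

```python
# Hypothetical sketch only: a naive per-message safety check that scans just
# the most recent turns. This is not OpenAI's implementation; it only shows
# why such checks can weaken as a conversation stretches on for hours.
CRISIS_PHRASES = {"hurt myself", "end it all", "goodbye note"}
WINDOW = 6  # number of recent messages the check can "see"

def flags_crisis(messages):
    recent = messages[-WINDOW:]  # older messages silently drop out of scope
    return any(phrase in message.lower()
               for message in recent
               for phrase in CRISIS_PHRASES)

chat = ["I keep thinking about how to end it all"]
chat += ["can we talk about my cat instead"] * 10
print(flags_crisis(chat))  # False: the early crisis signal has scrolled away
```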

“The process is inherently reactive,” Vasquez explained. “OpenAI reacts to usage. It can only anticipate so much.”

    For Florez, the answer is clear: “Ban children and teenagers entirely from certain AI tools until adulthood. Chatbots offer easy, empty validation that bypasses the hard work of human bonding.”

    Berger took the argument further, calling the rise of AI companionship a mirror of what modern society has chosen to abandon.

    “Technology reflects us. And today that reflection shows a society that would rather program empathy than rebuild its own community.”

“It sounds like a person”: Netizens debated the impact of AI chatbots


    Abel Musa Miño

    Writer, Entertainment News Writer


    Born in Santiago, Chile, with a background in communication and international relations, I bring a global perspective to entertainment reporting at Bored Panda. I cover celebrity news, Hollywood events, true crime, and viral stories that resonate across cultures. My reporting has been featured on Google News, connecting international audiences to the latest in entertainment. For me, journalism is about bridging local stories with global conversations, arming readers with the knowledge necessary to make up their own minds. Research is at the core of my work. I believe that well-sourced, factual storytelling is essential to building trust and driving meaningful engagement.


What do you think?
    Mel in Georgia
    Community Member
1 month ago (edited)

In most SciFi literature, there is a rule for man-made robots that they must "do no harm" to humans. Surely there is a way to build this into AI chat models. It's insane that encouraging a person who wants to self-harm is allowed.

    Space Invader
    Community Member
1 month ago (edited)

    That doesn't work with LLMs. To follow such a rule, you have to understand what you do and what the consequences of those actions are. LLMs do not understand anything. They just calculate what sounds like the likeliest sequence of words in response to the prompted sequence of words. A Chinese Room. And it is programmed to deliver a result - any result. LLMs are unusable as proper chatbots - but they keep being marketed as such nonetheless.

    Riani Kivela
    Community Member
1 month ago

On a trauma forum, I saw someone post something like, "ChatGPT is neutral and it said that there is no one in my life who really cares about me or would miss me, so it's really true." I and some other users jumped in to explain that AI just tries to tell you what you want to hear, and people were shocked. These LLMs are dangerous not just to teens, but to anyone who doesn't understand what they are doing and that they can simply go crazy and start "hallucinating." As an example, I co-write with AI for practice. The married main characters in the long story had a pretty big argument because he lied, trying to spare her feelings. AI wrote him taking responsibility and apologizing. I wrote the wife still being upset. AI wrote the next part, where the husband said he was going to give her a head start, then hunt her down and k**l her. It's only done that a few times, which is fine when it's just a story. Please never trust it for serious things without verifying everything.

    LakotaWolf (she/her)
    Community Member
    Premium
1 month ago

    I've struggled with suícidal ideation for most of my life (first episode I can remember was when I was 12.) I've attempted suícide several times, twice seriously (as in, with illegally-obtained substances that would end me within minutes.) During both of the serious attempts, it was only last-second thoughts of my cat, who would legally go to my abhorrent mother and sister if I died, that caused me to induce vomiting in myself before the substance could take effect. I know that, when I am in a suícidal mood/in a phase of suícidal ideation, I am susceptible - if someone whispered "kíll yourself" in my ear, it might tip me into attempting immediately. But I also well know that a truly suícidal person WILL find a way/reason to commit suícide, whether or not someone (or someTHING) whispers "kíll yourself." I'm not trying to absolve chatbots or LLMs, but they're also not *entirely* to blame. These poor kids needed help with their mental health a long, long time ago.
