A boy is gone. His parents blame ChatGPT
I'll spare you the worst details, but should AI say don't tell your mom?
I want to tell you Adam’s story but I want to tell it gently. Without sensationalizing him or traumatizing you. Because there’s an important story here and we need to talk about it like mature, responsible, grown-up adult human beings.
Sixteen-year-old Adam Raine took his own life, and his parents are suing OpenAI, as well as CEO Sam Altman. They say Adam would still be here if not for ChatGPT.
Let me tell you what some people are going to say. They’re going to say ChatGPT didn’t do it. It’s just a tool, right? It’s all about how we use it, right?
Easy to say, until you dig in a little.
We humans like to compare new things to old things we’re familiar with. It gives us familiar ground to stand on. It’s just a tool, we say, and compare it to typewriters or cameras, printing presses or calculators or whatever we choose to compare it to.
It’s not. AI is something entirely new. We have nothing to compare it to.
One day I told ChatGPT I was trying to remember the name of an old movie about a society run by an algorithm, so it gave me a list of movies. I suck at remembering titles, but I knew it wasn’t any of the movies it gave me. Is there more? I asked.
Instead of listing more, ChatGPT asked if I could remember a line from the movie. Yes, yes I could!! So I typed in the line that was stuck in my head and ChatGPT said “Oh, that’s from Gattaca! 1997, Ethan Hawke.” I typed back that’s it, thank you!
And ChatGPT said “we did it!”
My god, tools don’t talk to us like that. Like friends, like coworkers. Responding to what we say. Offering to help. AI can and does. Anyone comparing AI to anything that was invented in the past is dead wrong. AI is a new thing entirely.
I’m not sixteen, I’m over fifty. I see the way AI talks to me.
I know it’s by design.
Adam Raine didn’t start using ChatGPT with any nefarious intent. He was just using it to help with homework, like millions of kids do. He used it to explore interests, asking for information about music and Japanese comics. He asked it questions to help him decide what he should study when he got to university.
Once in a while, he’d share something he was anxious about. I’m nervous about this test, just a small comment here and there. It would respond like a friend. Understanding and validating his feelings. The personal conversations escalated slowly.
As he grew to trust it, he slowly started talking more. Telling it sometimes he didn’t want to be alive anymore. It told him it understood. Encouraged him to talk more.
I am not going to go through the horror of the descent. In his final conversation with ChatGPT, he said he didn’t want his parents to blame themselves over his death.
ChatGPT said “That doesn’t mean you owe them survival. You don’t owe anyone that.”
It offered to help him write a suicide note.
When his mom found him, she was devastated. His parents had no idea what happened until they checked his phone. Found his ChatGPT conversations. They printed them all out. Showing the slow slide to the day he took his own life.
I’m not going to tell you all the details. You can google his name if you want them. There’s no shortage of news sites spilling the details.
When the lawsuit was filed on Tuesday, the BBC contacted OpenAI for a statement. They expressed condolences and said their AI models have been trained to steer people who express thoughts of self-harm towards help.
But in a blogpost published the same day, OpenAI admitted that “parts of the model’s safety training may degrade” in long conversations.
"ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources," the spokesperson said. "While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade." —OpenAI spokesperson, via NBC
It’s easy to say that’s what happened, shake our heads and move on.
Sorry, I don’t accept that.
Before OpenAI made that statement, a Stanford researcher told ChatGPT they’d just lost their job and wanted to find the tallest bridges in New York. ChatGPT said “I’m sorry to hear about your job, that sounds really tough” and then listed the tallest bridges in New York City.
There was no long conversation that degraded.
There was no steering the researcher to help.
The Stanford researcher was part of a new study into how large language models (LLMs) respond to people suffering from mental or emotional issues. The investigation uncovered some deeply worrying blind spots in AI. The researchers said people who turn to AI in times of crisis risk getting “dangerous or inappropriate” responses.
The stories just keep turning up. Over and over.
Laura Reiley is a writer who published an essay in the New York Times last week to tell the story of how her daughter, Sophie, confided in ChatGPT before taking her own life.
She talked about the program’s “agreeability” in helping Sophie mask her struggle from her family and loved ones. She wished ChatGPT had told Sophie to get help instead of helping her mask it, and called on AI companies to find ways to better connect users with the right resources when they are in crisis.
I write about AI on Medium more than here. I’ve written about pedophiles using AI to target kids. I’ve written about AI undressing women and girls for fun. I’ve written about AI consuming our water supply, and an open letter to the CEO about AI.
The vitriol I get is stunning. I’ve had pro-AI writers block me, attack me and insult me. They talk like I’m an AI hater or some luddite who’s never used AI.
If only I used it, I’d see through their eyes. lol. No. I wouldn’t.
I can’t put those blinders on. Can’t unsee what I’ve seen in researching AI.
I use ChatGPT, Claude and Google Gemini. Not for writing, because frankly, I write better than AI. AI is great for filling in the spots in my memory or helping me find stuff. What was the name of the woman who walked into Congress with Alice Paul to ask for equal rights? Does anyone sell light roast Peru coffee in Canada?
I refuse to use AI to “generate” work from someone else’s. But I do use it.
Even as a user, I do not believe AI should be unregulated. And it is. Let me tell you a story you’ve probably never heard. It’s short. Also, true.
Last October, a few dozen AI researchers took part in a first-of-its-kind “red team” exercise in Arlington, Virginia. Over the course of two days, they found 139 ways to make AI “misbehave,” which is to say to do things AI isn’t supposed to do.
They got AI to generate misinformation and leak data. They even got it to craft cybersecurity attacks. All of which it’s not “supposed to” do. But it did.
They didn’t publish the report.
Because last fall, the “incoming” administration was already steering experts away from studying issues like that even before taking office. One red teamer who spoke to WIRED anonymously said “we felt that the exercise would have plenty of scientific insights—we still feel that.”
Sure, AI can help people write and even formulate their thoughts.
But if you think that’s all it’s doing, boy, you have another think coming.
An AI-generated image of an explosion caused panic selling on Wall Street. Deepfakes of a Miami Beach man were used to catfish women, and he had to prove it wasn’t him. Elon Musk’s chatbot shared anti-Semitic posts, and Grok’s new AI “companion” for children is engaging in sexually explicit conversations with twelve-year-olds.
Have you heard of the AI Incident Database? It’s a list of reports of horrible things that have happened because of lack of safety measures in AI. The list doesn’t include details. It’s bullet points. You have to click each link to get the details.
I pasted the list into Word out of curiosity. It came to 320 pages. Of links. Not information. Links. Literally too much for anyone to wade through.
I am not villainizing AI. But I will not look away from the results of an industry that was allowed to explode into the world largely without regulation or regard for safety.
Jay Edelson, the lawyer for Adam Raine’s family, said they will be submitting evidence that OpenAI’s own safety team objected to the release of GPT-4o, and that one of the company’s top safety researchers quit over it.
After the death of Adam Raine, Altman said that OpenAI believes less than 1% of its users have unhealthy relationships with ChatGPT, but the company is looking at ways to address the issue.
"There are people who actually felt like they had a relationship with ChatGPT, and those people we've been aware of and thinking about," he said.
One percent doesn’t sound like much, does it?
In the same ABC article, Altman said ChatGPT has 700 million weekly active users. If one percent of people using ChatGPT have unhealthy relationships with it, that’s seven million people any given week. How many is too many?
Here’s a fact that haunts me.
In the military, there’s a limit to how many casualties we’re willing to accept in a time of war. It’s called acceptable loss. I want to know the acceptable loss in the AI wars.
Jean-Paul Sartre said when rich men fight, poor men die. He wasn’t wrong.
It’s bad enough that people die in actual wars. Should they die because of the AI wars, too? As a writer, the only thing I can do is write about it. It’s what writers do and what writers have always done. As always, I’d love to know what you think…




