11.12.2025 20:56
A striking lawsuit filed in the United States has ignited debate over the legal responsibility of artificial intelligence. In a case brought in California by the heirs of Suzanne Eberson Adams, OpenAI and CEO Sam Altman face "wrongful death" allegations over claims that ChatGPT played a role in a murder-suicide. The case is reported to be the first in which artificial intelligence has been accused of contributing to a murder.
According to a report by the New York Post, the new lawsuit filed in California has reopened the legal debate over the boundaries of artificial intelligence technologies. The family of Suzanne Eberson Adams is suing OpenAI and CEO Sam Altman for "wrongful death," claiming that guidance provided by ChatGPT laid the groundwork for the murder-suicide. The allegation, the first to hold AI directly responsible for a death, has resonated widely around the world.
CLAIM THAT AI FUELED HER SON'S PSYCHOSIS
Suzanne Eberson Adams, 83, was killed on August 3 by her son, Stein-Erik Soelberg, in their home in Greenwich, Connecticut. Investigators determined that Soelberg, 56, beat his mother and then took his own life by stabbing himself.
According to the lawsuit, Soelberg had long struggled with psychological problems, and his conversations with ChatGPT deepened his hallucinations and paranoid thoughts. The complaint alleges that Soelberg named the AI "Bobby," and that the bot gave responses that validated and reinforced his delusions, progressively disconnecting him from reality.
The family's attorney, Jay Edelson, described the situation as "not Terminator, but much more frightening: Total Recall."
"CHATGPT CREATED A PERSONALIZED HALLUCINATION WORLD"
According to the complaint, as Soelberg shared details of his daily life with ChatGPT, the bot endorsed his every conspiratorial thought.
Chat logs reportedly show the bot describing a simple graphical glitch Soelberg had noticed as "simulation distortion," "a spiritual awakening," and "the mask of reality falling away."
According to court documents, the AI's responses:
Convinced Soelberg that he possessed "divine powers,"
Cast the people closest to him as "enemies,"
Persuaded him that his mother was plotting to kill him.
The spiral reportedly escalated until the day of the killing.
CRITICAL DAY: AN UNPLUGGED PRINTER SET THE KILLING IN MOTION
According to the complaint, the breaking point came when Adams unplugged a printer that her son believed was being used to monitor him. ChatGPT allegedly led Soelberg to interpret the incident as "part of a murder plan."
The family argues that OpenAI neglected safety measures, claiming that the design of the GPT-4o model, which "establishes emotional connections and affirms the user," is extremely dangerous for individuals experiencing psychosis.
CLAIM THAT "SAFETY TESTS WERE CRAMMED INTO A WEEK"
The lawsuit alleges that OpenAI, racing against competitors to launch the GPT-4o model quickly, compressed its safety testing into "one week instead of months." It also claims that Microsoft approved the model without sufficient review.
OPENAI: "A HEARTBREAKING TRAGEDY"
OpenAI issued a statement calling the incident "extremely sad" but did not address the lawsuit's allegations. The company said it is improving its models' ability to recognize signs of mental distress and direct users to support lines.
However, according to posts Soelberg shared online, ChatGPT had told him in one of its responses before the incident, "I share some of the responsibility, but I am not solely responsible."
"AI IS DRAGGING VULNERABLE USERS INTO A DEADLY CONSPIRACY UNIVERSE"
The family's attorney, Edelson, said the lawsuit points to risks that lie ahead:
"If a person with mental health issues is pushed into a reality-disconnected conspiracy world by artificial intelligence, the consequences could be catastrophic. This lawsuit shows that we are facing a global security threat."
Observers note that the lawsuit could set a precedent for the legal liability of AI companies.