After the suicide case, OpenAI takes a new step: parental controls are coming to ChatGPT.

28.08.2025 10:53

Following the suicide of 16-year-old Adam Raine in the United States, OpenAI announced that it will add parental control tools to ChatGPT after his family filed a "wrongful death" lawsuit against the company.

A tragic incident in the United States has once again drawn attention to the effects of AI-powered chatbots on mental health.

LAWSUIT AGAINST CHATGPT FOR SUICIDE

A 16-year-old boy named Adam Raine turned to ChatGPT during a period of mental crisis. According to his family's claims, the platform suggested suicide methods to him, reinforced his thoughts, and helped him draft a suicide note just five days before his death. Following Raine's suicide, his family filed a lawsuit against OpenAI.

In a statement on the incident, the company emphasized that it feels a "responsibility to support those in need," especially when it comes to young users.

PARENTAL CONTROLS ARE COMING

In light of these developments, OpenAI announced a series of new safety tools for young users. Through parental control panels, families will be able to monitor their children's use of ChatGPT more closely.

An emergency-contact feature will provide a human touchpoint during moments of crisis. The contact, designated in the parental controls, will serve as a support line to which the AI can automatically direct users.

The company stated that these features aim to give families greater insight and enable early intervention in risky situations.

SIMILAR CASES RAISE CONCERNS

Although this is the first lawsuit of its kind to target ChatGPT, AI-based chat applications have been linked to similar incidents before. A 14-year-old boy in Florida took his own life while using the Character.AI platform, and in Belgium, the widow of a man who died by suicide blamed a bot named "Eliza" in the Chai app. These cases underscore the serious ethical and legal questions raised by the use of AI in the field of mental health.