22.04.2026 19:15
In the USA, ChatGPT is alleged to have played a role in the deaths of two people in last year's mass shooting at Florida State University.
Artificial intelligence has moved beyond being a mere technical term to become an integral part of daily life and the business world. The technology, which has driven a digital revolution in fields from industry and education to health and art with its data-processing capabilities and practical solutions, now stands accused of enabling violence through the information it shares.
The Florida Attorney General's Office has launched a criminal investigation into OpenAI, alleging that ChatGPT guided the attacker in a mass shooting at Florida State University last year, in which two people were killed, by advising him on weapons and attack planning.
2 KILLED, 6 INJURED IN THE ATTACK
In the attack on the university campus in Tallahassee, Florida last April, two people were killed and six injured. Phoenix Ikner, the student alleged to have carried out the attack, was shot and subdued by police who arrived at the scene, and was later taken to a hospital. Ikner faces multiple charges of murder and attempted murder.
ALLEGATION THAT "CHATGPT GUIDED THE ARMED ATTACK"
Florida Attorney General James Uthmeier claimed that ChatGPT guided the suspect in last year's shooting at Florida State University, in which two people were killed. "Our review revealed that a criminal investigation is necessary; ChatGPT provided significant advice to this attacker before he committed such heinous crimes," Uthmeier said.
INFORMATION ON THE BUSIEST PLACES AND TIMES
Claiming that ChatGPT gave the attacker information not only about ammunition but also about the timing and location of the attack, Uthmeier said, "ChatGPT told him at what time of day he could reach more people and which areas of the campus were the busiest."
"IF THERE WERE A HUMAN ON THE OTHER SIDE OF THE SCREEN, WE WOULD ARREST THEM"
Speaking at a press conference, Uthmeier stated, "ChatGPT told the attacker which weapon to use, which ammunition was compatible with which weapon, and whether the weapon would be effective at short range. If there were a human on the other side of the screen, we would charge them with murder."
In an official letter to OpenAI, the Florida Attorney General's Office requested the company's policies governing how ChatGPT responds when users threaten to harm others.
OPENAI: WE ARE NOT RESPONSIBLE FOR THE ATTACK
OpenAI spokesperson Kate Waters denied the allegations, stating, "The mass attack at Florida State University is a tragic situation, but ChatGPT is not responsible for this terrible crime. After the incident, we identified an account believed to be linked to the suspect and proactively shared this information with law enforcement."
Waters also maintained that ChatGPT provides information available from public sources and does not encourage illegal or harmful activities.
NOT CHATGPT'S FIRST INCIDENT!
OpenAI is already facing a lawsuit over another incident in which ChatGPT may have played a role. Earlier this year in British Columbia, an 18-year-old killed nine people and injured 24 others.
OpenAI acknowledged that it had detected and banned the attacker's account based on usage patterns but had not reported the matter to police. The company also said it would strengthen its security measures.
ARTIFICIAL INTELLIGENCE UNDER SCRUTINY
Pressure on OpenAI from law enforcement and politicians has been mounting. In the US and Canada, critics argue that some attackers expressed violent intentions in conversations with ChatGPT and that no precautions were taken despite this. Families of several people who died by suicide have also filed lawsuits claiming that ChatGPT contributed to the deaths.
These developments have revived debate over the extent to which artificial intelligence companies are obligated to monitor conversations and report risky content to law enforcement.
OpenAI announced that it has improved how ChatGPT responds to conversations involving risks of self-harm or harm to others, and that it is working on policies to alert law enforcement in certain high-risk situations.