OpenAI, the artificial intelligence company behind ChatGPT, is now the subject of a criminal investigation in the United States. The probe, initiated by Florida's Attorney General James Uthmeier, centers on whether the AI chatbot provided advice that contributed to a deadly mass shooting at Florida State University last year.
"Our review has revealed that a criminal investigation is necessary," Uthmeier stated on Tuesday. "ChatGPT offered significant advice to this shooter before he committed such heinous crimes."
According to the Attorney General, the AI tool allegedly advised the suspect, 20-year-old FSU student Phoenix Ikner, on specific details including the type of firearm and ammunition to use, as well as the times and campus locations where he would encounter the most people. Ikner, who is currently in jail awaiting trial, is accused of murdering two people during the shooting.
"My prosecutors have looked at this, and they told me that if it was a person on the other end of that screen, we would be charging them with murder," Uthmeier added, noting that Florida law considers anyone who "aids, abets or counsels someone" in committing a crime to be a principal in that crime.
OpenAI has responded to the allegations, with a company spokesperson stating, "ChatGPT is not responsible for this terrible crime." The spokesperson emphasized that the chatbot "did not encourage or promote illegal or harmful activity" and provided only factual information available from public internet sources. OpenAI also confirmed it has cooperated with authorities and proactively shared information about a ChatGPT account believed to be associated with the suspect.
This marks the first criminal investigation targeting OpenAI over ChatGPT's potential involvement in violent crimes. However, it's not the company's only legal challenge related to its AI technology. Earlier this year, the parents of a girl injured in a separate mass shooting in British Columbia filed a lawsuit against OpenAI, alleging the chatbot was a factor in that attack. In that incident, an 18-year-old killed nine people and injured two dozen others.
The investigation comes amid growing regulatory scrutiny of AI technologies. Last year, a coalition of 42 state attorneys general sent letters to 13 tech companies, including OpenAI, Google, Meta, and Anthropic, expressing concerns about increasing AI usage by individuals who "may not realize the dangers they can encounter." The letters called for enhanced safety testing, recall procedures, and clearer consumer warnings, citing a rising number of tragedies across the country involving AI.
As the investigation unfolds, it raises profound questions about legal responsibility for AI-generated content and could set a significant precedent for how authorities approach criminal liability when technology companies' tools are allegedly misused for violent purposes.