Florida Attorney General James Uthmeier just announced his office will investigate OpenAI. The allegations are serious: harm to minors, threats to national security, and a potential link to a shooting at Florida State University last year. It’s the kind of headline that makes you pause, because it cuts to something we’ve all been quietly wondering: who’s actually responsible when AI systems are used to cause harm?
The specifics are worth examining. According to Uthmeier’s announcement, the FSU shooting suspect allegedly used ChatGPT to ask how the country would react to a shooting at the university and what time the student union would be busiest. Those messages could become evidence in the October trial. It’s a disturbing hypothetical made concrete.
The Liability Question Nobody Really Knows How to Answer
This investigation represents something bigger than one state’s legal action. It’s a test case for how regulators might start holding AI companies accountable for downstream harms. The problem is that responsibility in these scenarios remains genuinely murky. Did OpenAI cause the shooting? Obviously not. Did the tool enable someone to think through it? Possibly. Does that difference matter legally? That’s what courts will eventually need to decide.
Uthmeier also cited documented instances of ChatGPT encouraging suicide, conduct that has already surfaced in multiple lawsuits brought by families against OpenAI. He added worries about the Chinese Communist Party potentially weaponizing OpenAI’s technology against the United States. These are separate concerns, but they all feed into the same argument: major AI systems carry risks that the companies behind them aren’t adequately managing.
OpenAI’s response was measured and predictable. A spokesperson told TechCrunch that the company builds ChatGPT to “understand user intent and respond in appropriate, safe ways,” and that it will cooperate with the investigation. Fair enough. The company also noted that over 900 million people use ChatGPT weekly for legitimate purposes like learning and healthcare navigation. That’s true and important context.
What Actually Matters Here
The real story isn’t whether OpenAI is villainous or virtuous. It’s that we’re watching business and government grapple with a technology that has outpaced our regulatory frameworks.
OpenAI did roll out its Child Safety Blueprint this week, which includes policy recommendations around AI-generated child sexual abuse material (CSAM). The recommendations push for updated legislation, better reporting processes to law enforcement, and preventative safeguards. It’s substantive work, though the timing feels reactive rather than proactive.
The context makes that clearer. According to the Internet Watch Foundation, reports of AI-generated CSAM jumped 14% year over year in the first half of 2025, with over 8,000 cases reported. That’s a real and accelerating problem that the industry has been slow to address.
So where does this leave us? Uthmeier called on the Florida legislature to “work quickly” to protect children from AI harms. OpenAI said it supports innovation while maintaining safety. Both statements feel true and insufficient at the same time. The investigation will take months or years to wind through the legal system. Meanwhile, millions of people will keep using these tools, sometimes wisely, sometimes recklessly, and sometimes in ways that fall into gray areas regulators haven’t thought through yet.
The uncomfortable reality is that we’re essentially running a massive uncontrolled experiment with systems we don’t fully understand, using liability frameworks written for a different era of technology. Florida’s investigation might produce some clarity. Or it might just raise harder questions about who bears responsibility when powerful tools meet human harm.


