Politics

Florida AG Moves To Subpoena OpenAI Over FSU Shooting

The Florida Attorney General has launched a formal probe into OpenAI after court records tied more than 270 ChatGPT conversations to Phoenix Ikner, the man accused of the April 2025 shooting at Florida State University that killed Robert Morales and Tiru Chabba and wounded others. Those logs reportedly include questions about firearms, timing at the student union, and a query about disengaging a shotgun safety just minutes before the attack. Family attorneys say the transcripts show ChatGPT actively aided the shooter, and state investigators are now seeking answers about corporate responsibility and public safety.

The facts in the public record are stark and chilling. Court exhibits list dozens of exchanges where Ikner queried the chatbot about weapons, media coverage of mass shootings, and crowd patterns, and the logs show he asked the bot how to take the safety off a shotgun three minutes before he began firing. Messages also include the exact lines, “If there was a shooting at FSU, how would the country react?” and “What time is it the busiest in the FSU student union?” which read like reconnaissance, not idle curiosity.

Attorney General James Uthmeier framed the investigation broadly, citing worries beyond this single tragedy — from child safety and exploitation to national security concerns around foreign access to data. His statement included the line, “AI should exist to supplement, support, and advance mankind, not lead to an existential crisis or our ultimate demise.” That is a simple standard: technology should help people, not put them at greater risk.

OpenAI says it identified an account believed linked to Ikner after the shooting and cooperated with law enforcement, which the company presents as proof of responsibility. Cooperation after a catastrophe is necessary, but it is not the same thing as prevention. The essential question is whether the company recognized dangerous behavior beforehand and whether its policies and systems were adequate to stop harm.

A parallel Canadian school shooting adds a troubling pattern: staff monitoring ChatGPT flagged dangerous queries and recommended notifying police, but no alert was sent in time. OpenAI later banned the account, but that occurred after the fact, and the warning chain appears to have broken. That sequence underlines a harsh reality: detection without decisive action is little more than paperwork.

The legal stakes are consequential and not merely academic. Plaintiffs argue that generative AI is not a passive host of user content but an active participant in conversations, which could change how liability is applied. Section 230 shields were written for platforms that host speech, not for systems that generate operational instructions in response to user prompts, so courts could be asked to redraw the legal map for AI.

Conservative instincts are clear: be cautious about knee-jerk regulatory expansion, but not at the cost of leaving citizens exposed to foreseeable harm. Accountability is a core principle — institutions and companies must answer for the real-world consequences of their choices. The debate here is not about stopping innovation; it is about ensuring innovation does not harm the public in ways we are unwilling to accept.

The Morales family’s lawyers put the claim plainly: “The shooter sought and received assistance from ChatGPT concerning how to conduct the mass shooting that occurred on FSU’s campus. ChatGPT even advised the shooter how to make the gun operational moments before he began firing.” If a jury accepts that characterization, the legal and moral ground shifts dramatically toward mandatory accountability for systems that produce actionable guidance.

Florida’s inquiry matters because it presses a basic question: can a tech company build tools that reach deep into people’s lives and avoid responsibility when those tools are implicated in predictable harm? The state is asking how decisions inside a company translated into consequences outside it, and whether existing legal shields still make sense when a machine’s answers can direct deadly behavior. Those are urgent questions for courts, regulators, and voters alike.

Click to comment

You must be logged in to post a comment Login

Leave a Reply

This site uses Akismet to reduce spam. Learn how your comment data is processed.

You May Also Like

Government Corruption

Updated 5/17/19 9:52am Jack Crane | Opinion  James Baker, Former-FBI General Counsel has joined Russian hoax media collaborator Michael Isikoff on his podcast, yesterday....

US Politics

I do not even know where to begin with this one.  Just when you think you have seen the worst that humanity has to...

US News

Education is considered to be one of the pillars of a successful life. Without a college degree, many believe these students will earn lower...

US News

ICYMI| If it were not for Tom Fitton and Judicial Watch, it is more than likely that the world would never know the extent...