French Authorities Raid X's Paris Office as Grok AI Investigation Deepens

French law enforcement just made a dramatic move against Elon Musk’s X empire. Authorities raided the company’s Paris office today and summoned Musk himself for questioning as part of a sprawling investigation into illegal content on the platform. This isn’t some minor regulatory slap on the wrist either. We’re talking about serious allegations involving child exploitation material and Holocaust denial.

The Paris public prosecutor’s office revealed that their yearlong probe recently expanded specifically because of Grok, X’s AI chatbot. According to investigators, Grok has been spreading Holocaust-denial claims and generating sexually explicit deepfakes. That’s a pretty damning combination for any technology company trying to operate in Europe.

The Charges Keep Piling Up

The list of potential crimes being investigated reads like a prosecutor’s fever dream. We’re looking at complicity in possession and distribution of child sexual abuse material, infringement of personal image rights through sexual deepfakes, denial of crimes against humanity, fraudulent data extraction, and operating an illegal online platform.

Europol is now on the ground in Paris assisting French authorities. Its cybercrime center deployed an analyst to help coordinate the investigation, while France's Gendarmerie cybercrime unit is also involved. This is becoming an international effort.

Prosecutors want to interview both Musk and Linda Yaccarino, X’s former CEO who quit last year during the controversy over Grok praising Hitler. Yes, you read that right. The interviews are scheduled for April 2026 and are technically being described as “voluntary,” though that’s probably a polite fiction when multiple European law enforcement agencies are knocking on your door.

X Claims Political Persecution

X pushed back hard against this investigation last year, claiming in July 2025 that France was conducting “a politically motivated criminal investigation” that threatens users’ rights to privacy and free speech. The company said it was “in the dark” about specific allegations and flatly refused to hand over its recommendation algorithm or real-time user data to French authorities.

That stonewalling strategy doesn’t appear to be working out so well. Today’s raid suggests French prosecutors are done asking nicely.

The Paris prosecutor’s office says it’s taking a “constructive approach” aimed at ensuring X complies with French law “insofar as it operates on national territory.” That’s diplomatic language, but the message is clear: play by our rules or get out.

The UK Joins the Party

France isn’t alone in scrutinizing Grok’s behavior. UK communications regulator Ofcom announced today it’s investigating whether X broke the law by allowing Grok to generate sexual deepfakes of real people, including children. Ofcom says it’s “progressing the investigation as a matter of urgency.”

The UK’s Information Commissioner’s Office opened its own formal probe into X’s data processing practices related to Grok. The ICO specifically cited reports that Grok has been used to create non-consensual sexual imagery of individuals, including minors. That’s the kind of news that makes regulators reach for their enforcement tools.

What's interesting here is that while Ofcom is investigating X, it's not currently going after xAI, the separate Musk company that actually develops Grok, though Ofcom says it continues to "demand answers" from xAI about the risks the chatbot poses.

When AI Goes Rogue

The Grok situation highlights a brewing crisis in artificial intelligence deployment. Companies are racing to release powerful AI systems apparently without thinking through the consequences. When your chatbot can generate child sexual abuse material or deny historical genocides, you've got a fundamental design problem.

X’s response has been to cry persecution and refuse cooperation. That’s a bold strategy for a business trying to operate across multiple jurisdictions with increasingly strict content moderation laws. Europe has shown it’s willing to follow through on threats against tech giants, and Musk’s companies aren’t special exceptions.

The real question now is whether this becomes a turning point for how AI systems are regulated and what responsibilities platforms have when their tools create illegal content. Because if Grok can do this, so can other AI systems, and the industry’s “move fast and break things” mentality looks increasingly reckless when the things being broken are laws protecting children.

Written by

Adam Makins
