Elon Musk’s X has made a formal commitment to review reports of suspected illegal hate and terrorist content within 24 hours on average, under an agreement with the UK’s media regulator Ofcom. On the surface, it sounds like a win for digital safety. But listen closely to what the experts are actually saying, and you’ll hear something more cautious.
According to BBC reporting, the company will assess at least 85% of flagged content within 48 hours and submit performance data to Ofcom quarterly for a year. Oliver Griffiths, Ofcom’s online safety director, called the commitments a “step forward”—which is telling language. Not a breakthrough. A step.
The timing matters. These pledges come after a series of religiously motivated attacks targeting Jewish communities in the UK, including incidents at Heaton Park Synagogue in Manchester and locations across London. The problem isn’t abstract. It’s tangible, violent, and recurring.
Why This Matters More Than You Might Think
Ofcom launched a compliance programme last December specifically to investigate whether the largest social media platforms have adequate systems for handling reports of illegal hate and terror material. The regulator has evidence that such content persists across major platforms, and X was clearly called out as needing improvement.
The commitments themselves address a real gap. Ofcom noted that some organizations had flagged “multiple pieces” of suspected illegal content to X but were left uncertain whether their reports had even been received or acted upon. Imagine that frustration: you report dangerous material multiple times and get nothing but silence in return.
X has also made two additional commitments. First, it will work with experts to improve its reporting systems for illegal hate and terrorist content. Second, it will restrict UK access to accounts operated by or on behalf of proscribed terrorist organisations if they’re posting UK illegal terrorist content.
The Scepticism Is Built In
Danny Stone, chief executive of the Antisemitism Policy Trust, framed his response tellingly: “It’s a good start.” Then he added the knife twist. “X is failing in so many regards to tackle open racism on its platform.”
He’s not wrong. The platform has become a magnet for coordinated hate speech campaigns. The commitments here are narrow and specific—focused on compliance with Ofcom’s review process, not on the broader cultural problem of racism festering in plain sight.
Iman Atta, director of Tell Mama, which records anti-Muslim incidents in the UK, made a similar point more diplomatically: “The test is not what is promised, but what is delivered.”
That’s the real question hanging over all of this. X has made promises before. Every major platform has. The difference between a genuine commitment to safety and a performative gesture often comes down to mundane details: whether the review process is actually staffed and resourced, whether the 24-hour target becomes an excuse for rushed decisions, whether companies actually follow through when no one’s watching.
A Systemic Problem Isn’t Solved by Process Improvements
Here’s what’s important to understand: procedural commitments like these can only do so much. A 24-hour review window doesn’t prevent content from spreading virally in the first six hours. An 85% compliance rate means 15% of reports still slip through. And even if X perfectly executes this agreement, it addresses only one platform’s approach to one category of harm.
The broader issue, that hate speech and extremist recruitment have found fertile ground on social media platforms, remains unsolved. Ofcom can regulate response times. It can’t regulate culture.
What’s encouraging is that the regulator is actually pushing back. Griffiths explicitly said Ofcom was “challenging the platforms to tackle the issue and take firmer action.” That suggests this isn’t the end of scrutiny. It’s the beginning of a different kind of pressure.
Whether X will treat this as a serious commitment or a compliance checkbox to tick and forget remains, as Atta said, a question of delivery. The fact that so many advocates are hedging their praise with caution tells you everything you need to know about the trust deficit here.