X has just agreed to review suspected illegal hate and terrorist content flagged through its reporting tool within 24 hours on average, according to commitments accepted by Ofcom. The pledge comes as the UK regulator steps up pressure on social media giants to tackle online harm, particularly following a spike in religiously motivated attacks targeting Jewish communities.
On the surface, it sounds reasonable. A major platform committing to faster content moderation. But dig deeper and you’ll find a more complicated picture, one in which promises and actual delivery have often sat worlds apart.
What’s Actually Changing
The Elon Musk-owned company has committed to several specific targets. Beyond the 24-hour average for reviews, X will aim to assess at least 85% of flagged content within 48 hours. It has also promised to consult external experts on improving its reporting systems for illegal hate and terror material, and to block UK access to accounts operated by or on behalf of proscribed terrorist organisations if they post terrorist content that is illegal in the UK.
These aren’t trivial commitments. Ofcom’s online safety director Oliver Griffiths described them as a “step forward,” particularly given the context of recent violence. The regulator has evidence that terrorist content and illegal hate speech persist on some of the largest social media platforms, and it’s clearly tired of polite requests.
X will submit performance data to Ofcom every three months for a year, theoretically giving the regulator teeth to monitor compliance.
The Scepticism Is Warranted
Here’s where things get thorny. Danny Stone, chief executive of the Antisemitism Policy Trust, called the action a “good start” but emphasised there’s “still more to do.” More tellingly, he said X is “failing in so many regards to tackle open racism on its platform.”
That’s not a ringing endorsement. It’s cautious optimism wrapped around genuine concern.
Iman Atta, director of Tell Mama, a national project recording anti-Muslim incidents in the UK, welcomed the updated targets but made a crucial point: “the test is not what is promised, but what is delivered.” She’s right. Platforms have a long history of committing to better moderation, only to fall short when the cameras move on.
The fact that Ofcom felt compelled to launch a separate investigation into X’s AI tool Grok over concerns it was being used to create sexualised images suggests the platform’s problems extend beyond hate speech. These aren’t isolated incidents; they’re symptoms of systemic challenges in content moderation at scale.
The Bigger Picture
According to BBC reporting, the UK has seen a troubling surge in attacks targeting Jewish communities, including the Heaton Park Synagogue attack in Manchester last October, an attack in Golders Green in April, and recent arson attempts on Jewish sites in London. This isn’t abstract. The pressure on X comes from real violence, real fear, and real communities seeking protection.
Yet a harder question lurks here. How much can a single platform realistically moderate when its very business model incentivises engagement, sometimes at the expense of safety? A 24-hour review window sounds impressive until you realise that’s still 24 hours during which harmful content remains live, potentially spreading and radicalising.
The technology industry’s relationship with regulation remains contentious. Platforms often treat compliance as a box to tick rather than a genuine commitment to change. Whether X will be different remains to be seen.
What We’re Watching
The three-month reporting cycles will be telling. If X meets its targets consistently, that suggests real operational change. If performance deteriorates after the initial spotlight fades, well, we’ve seen that movie before.
There’s also the question of whether 24 hours is actually fast enough in an age where viral content can reach millions in minutes. By the time a piece of hate content is reviewed and removed, it may have already done its damage.
The real test won’t be what X promises Ofcom, but whether these commitments translate into a platform where targeted communities actually feel safer.