X's 24-Hour Content Pledge: Ofcom Gets Tough, But Critics Aren't Convinced

Elon Musk’s X has agreed to review UK reports of suspected illegal hate and terrorist content within 24 hours on average, according to commitments accepted by Ofcom. On the surface, it sounds like progress. In reality, it’s a cautious first step in what activists are calling an uphill battle against online extremism that’s becoming increasingly brazen.

According to BBC reporting, the pledges apply specifically to content flagged through X’s illegal content reporting tool. The company will submit performance data to Ofcom every three months for a year, creating at least a thin layer of accountability. They’re also promising to assess at least 85% of reports within 48 hours, which gives them some wiggle room if they miss the 24-hour average.

But here’s where it gets interesting. Ofcom’s online safety director Oliver Griffiths called the commitments a “step forward,” yet simultaneously revealed something troubling: the regulator has evidence that terrorist content and illegal hate speech are “persisting on some of the largest social media sites.” Translation: we’re only now getting serious about a problem that’s been festering.

The Real Test: Can X Actually Deliver?

The timing of these commitments is hard to ignore. The UK has experienced a disturbing surge in religiously motivated attacks targeting Jewish communities, including the Heaton Park Synagogue attack in Manchester last October, an assault in Golders Green in April, and recent arson attempts on Jewish sites in London. Ofcom clearly felt pressure to act.

Yet the skepticism from civil rights groups is palpable. Danny Stone, chief executive of the Antisemitism Policy Trust, offered a measured endorsement that was really a warning. “X is failing in so many regards to tackle open racism on its platform,” he said, before adding that the action was a “good start.” The real test, Stone implied, will be whether X actually follows through.

That doubt isn’t unreasonable. Social media platforms have made commitments before. They’ve set targets, submitted reports, and then quietly continued operating as usual when attention moved elsewhere. What’s different here is that Ofcom is establishing a monitoring framework that could theoretically impose consequences.

Two Commitments Worth Watching

Beyond the 24-hour response target, X has made two additional commitments that touch on systemic problems regulators have identified.

The first addresses a frustration that’s been plaguing civil society organizations: they’ve reported “multiple pieces” of suspected illegal hate and terrorist content to X, but had no idea whether their reports were actually received or acted upon. X has now agreed to engage with experts about improving these reporting systems. It sounds administrative, but it’s actually significant. Without clear channels and feedback mechanisms, flagging content becomes an exercise in shouting into the void.

The second commitment is more muscular: X will withhold UK access to accounts reported for posting UK illegal terrorist content if they’re operated by, or on behalf of, a proscribed terrorist organisation. Iman Atta, director of Tell Mama, which records anti-Muslim incidents in the UK, called this particularly encouraging. “This sends an important message that no platform or body operating in this country is above scrutiny,” she said.

Yet Atta also delivered the most important line in this entire story: “The test is not what is promised, but what is delivered.”

The Bigger Picture on Technology

What’s revealing here isn’t just X’s commitments, but what prompted them. Ofcom launched a broader compliance programme in December assessing whether the biggest social media companies have adequate systems for dealing with illegal content. This suggests that the regulator has finally accepted a hard truth: self-regulation isn’t working, and platforms won’t move without government pressure.

The irony, of course, is that X has become something of a lightning rod for content moderation debates. Since Musk’s acquisition, the platform has seen staff reductions and policy changes that critics argue have made it easier for extremist content to spread. Ofcom’s investigation into X’s AI tool Grok, which raised concerns about its ability to generate sexualised images, suggests those worries extend to what the platform’s own technology is enabling.

This gets at an uncomfortable truth about social media: the business model fundamentally rewards engagement, and outrage drives engagement. A 24-hour review window is nice, but it doesn’t address the algorithmic incentives that might be pushing hateful content toward vulnerable audiences in the first place.

What Comes Next?

Ofcom has positioned itself as willing to take on the platforms. That’s necessary. But enforcement will be the real measure. Can the regulator actually impose meaningful penalties if X misses its targets? Will it demand algorithmic changes alongside content moderation improvements?

For now, we have commitments and quarterly reporting schedules. That’s something. Whether it’s enough to meaningfully reduce the spread of hate and terrorist content on one of the world’s most influential platforms remains an open question that only time and rigorous monitoring will answer.

Written by

Adam Makins
