Elon Musk’s X has found itself under the regulatory spotlight again, this time making a set of commitments aimed at tackling illegal hate and terrorist content on the platform. According to BBC reporting, the company has pledged to review UK reports of suspected illegal hate and terror material within 24 hours on average, a commitment accepted by Ofcom, the UK’s media regulator.
It sounds straightforward enough. Flag something, get a response within a day. But the devil, as always, is in the details.
The Numbers Behind the Promise
X has committed to averaging less than 24 hours for its reviews, while also promising to assess at least 85% of reports within 48 hours. The platform will submit performance data to Ofcom every three months for a year, allowing the regulator to monitor whether these targets are actually being met. It’s the kind of measurable commitment that regulators love because it creates a paper trail.
Oliver Griffiths, Ofcom’s online safety director, called the commitments a “step forward,” particularly in light of recent religiously motivated crimes targeting Jewish communities in the UK. The timing matters here. The UK has experienced a disturbing series of attacks, including the Heaton Park Synagogue attack in Manchester in October 2025, an attack in Golders Green in April, and recent arson attempts on Jewish sites in London. This isn’t abstract policy talk; it’s a response to real violence.
But Is It Enough?
Here’s where enthusiasm hits a wall. Danny Stone, chief executive of the Antisemitism Policy Trust, called the action a “good start” but was notably cautious about declaring victory. “X is failing in so many regards to tackle open racism on its platform,” he said, adding that Ofcom would need to actually hold the company accountable for what it promised.
That last part is crucial. Promises are easy. Execution is harder, especially for a platform that has faced consistent criticism for inadequate moderation.
The underlying issue isn’t just speed of response. According to BBC reporting, some organisations had flagged multiple pieces of suspected illegal hate and terrorist content to X but were unclear whether their reports had even been received or acted upon. X’s second commitment addresses this directly: the company will now engage with experts about its reporting systems and improve transparency around whether flagged content was received and what happened to it next.
The Broader Context
This enforcement action is part of a larger compliance programme Ofcom launched in December, assessing whether major social media companies have adequate systems for handling illegal content reports. Griffiths noted that Ofcom had evidence of terrorist content and illegal hate speech “persisting on some of the largest social media sites,” signalling that X isn’t alone in this problem, though it may be one of the most visible cases.
X also committed to withholding UK access to accounts reported for posting terrorist content that is illegal in the UK, provided the company determines they’re operated by or on behalf of a terrorist organisation proscribed in the UK. Iman Atta, director of Tell Mama, which records anti-Muslim incidents in the UK, welcomed this commitment, saying it sent “an important message that no platform or body operating in this country is above scrutiny.”
But she also delivered a pointed reality check: “The test is not what is promised, but what is delivered.”
The Bigger Picture
It’s worth stepping back and noting the broader landscape here. According to BBC reporting, police in the UK recorded more than 1,500 racist hate crimes and 2,367 race-related incidents in the past year. Meanwhile, some of the most visible UK anti-immigration social media accounts have been traced back to Sri Lanka and Vietnam, suggesting that online hate isn’t purely domestic in origin.
X also faces a separate ongoing Ofcom investigation into its AI tool Grok, over concerns it was being used to create sexualised images. So even as the platform commits to faster content moderation, questions linger about other features and their safety implications.
The real test won’t be whether X can hit a 24-hour target on paper. It’ll be whether these commitments translate into a meaningfully safer platform for UK users, or whether we’re just watching a tech company go through the regulatory motions until attention shifts elsewhere.


