X Is Drowning in Fake Iran Attack Videos and Nobody's Stopping It

When major geopolitical events happen, X becomes a digital wild west. Within minutes of the US and Israeli governments announcing a “major combat operation” against Iran early Saturday morning, the platform was already flooded with misleading content, fabricated videos, and outright lies about what was actually happening on the ground.

WIRED reviewed hundreds of posts making false claims about the attack locations and scale. The results are alarming. Old footage from months or even years ago is being recycled as current combat footage. Videos are being attributed to completely wrong locations. AI-generated images are spreading unchecked. Video game clips are being passed off as real warfare.

And here’s the kicker: most of this viral disinformation is coming from accounts with blue check marks, meaning these users pay for X’s premium service and can earn money based on engagement. They’re literally getting paid to spread lies.

When Blue Checks Mean Nothing

You’d think a blue check mark on X would mean something. Once upon a time, verification meant an account was legitimate and trustworthy. Now it just means someone has a credit card and access to X’s premium tier. It’s become a badge of monetization rather than credibility.

One blue-checked account posted a video claiming to show ballistic missiles over Dubai. The video actually showed Iranian missiles fired at Tel Aviv back in October 2024. It got 4.4 million views. Another viral clip supposedly showing an Israeli fighter jet being shot down has been shared dozens of times and viewed over 3.5 million times, despite there being zero credible reports of any Israeli jets actually being shot down.

The problem isn’t that false information exists. It never has been. The problem is that on X, under Elon Musk’s leadership, there’s virtually no friction to stop it from spreading at scale. Community notes exist as a band-aid solution, but they appear after millions of people have already consumed the false narrative. It’s like locking the barn door after the horses have escaped into three different counties.

The AI-Generated Smokescreen

Tehran Times, the Iranian government-aligned news outlet, posted what appears to be an AI-generated image claiming an American radar installation in Qatar was destroyed in an Iranian drone strike. The claim hasn’t been verified, but the image was still posted from an official news account and spread across the platform.

This is where the technology becomes genuinely dangerous. AI image generation has become sophisticated enough that average users can’t easily spot what’s fake. A government-aligned outlet using synthetic imagery to make false military claims isn’t just misinformation anymore; it’s psychological warfare waged through a social media platform that has effectively abandoned content moderation.

Pro-Iranian accounts have also been repurposing genuine footage from Saturday’s attacks to misrepresent strike locations. The Iran Observer account posted an image of Dubai while claiming it showed a strike in Tel Aviv. It got 200,000 views before deletion, but by then dozens of other accounts had already shared the same image with identical false captions.

Nobody’s In Charge

X hasn’t responded to requests for comment. Of course it hasn’t. There’s no real content moderation infrastructure anymore. During the Israel-Hamas war and more recently during the anti-immigration enforcement protests in LA, the platform became completely inundated with inaccurate posts. The pattern repeats. The problem persists. Nothing changes.

A pro-Trump account with a blue check posted images claiming to show the palace of Iranian Supreme Leader Ali Khamenei before and after the attack, racking up 365,000 views. The “before” picture isn’t actually the palace at all; it’s the Mausoleum of Ruhollah Khomeini on the other side of Tehran. But why let accuracy get in the way of engagement metrics?

This isn’t about one political side or another. It’s about a platform that has systematically dismantled its ability to verify information, moderate content, or slow the spread of misinformation during moments when accurate information actually matters. When geopolitical tensions are running high and military operations are happening in real time, the stakes of false information aren’t just about engagement numbers or algorithmic reach anymore.

The real question nobody seems willing to ask is whether a social media platform primarily designed to maximize engagement through controversy should be the main source where billions of people get their breaking news about international conflicts. X used to be Twitter, a platform with actual editorial standards. Now it’s become something else entirely: a monetized disinformation machine that rewards whoever can generate the most viral false content, regardless of the real-world consequences.

Written by

Adam Makins
