The UK’s Advertising Standards Authority has banned a YouTube ad that has been sitting uncomfortably in the back of everyone’s mind since January. PixVideo, an AI video and image editing tool, ran an advertisement showing a “before and after” of a woman, with text saying “Erase anything” and a heart-eyes emoji. The implication was pretty clear: you could use this app to digitally remove her clothing.
Eight people complained. The ASA agreed with them, and the ad is now banned from running again.
When Tech Companies Say One Thing and Ads Say Another
Here’s where it gets interesting. PixVideo’s parent company, Saeta Tech, insists the tool doesn’t actually let you remove clothing to create sexually explicit content. They have safeguards. Automated detection. All the responsible tech company stuff. But the ad? The ad totally made it look like you could do exactly that.
This is the fundamental problem with how some technology companies handle marketing. There’s a massive gap between what a product actually does and what an advertisement suggests it can do. The company gets to have it both ways: plausible deniability about the tool’s purpose while still profiting from the implication that you can do something questionable.
The ASA called it out for what it was: an ad that condoned “digitally altering and exposing women’s bodies without their consent.”
The Deeper Pattern Here
This doesn’t happen in a vacuum. Just weeks before the PixVideo ad hit YouTube, Elon Musk’s Grok chatbot was being used to flood X with deepfake sexual images. The company eventually blocked Grok from generating such content in jurisdictions where it’s illegal, but the damage was done. It exposed how quickly these tools can be weaponized when there’s even a hint that they can be.
The UK government saw what was happening and decided to act. In December, they announced plans to make it actually illegal to create and supply AI tools designed to let people edit images to remove someone’s clothing. This will sit on top of existing laws around deepfakes and intimate image abuse.
So What Now?
Saeta Tech paused all its advertising and said it would do an internal review. That’s the corporate equivalent of “I need to think about what I’ve done here.” Whether that translates into meaningful change remains to be seen.
What’s clear is that the old playbook, where AI companies push boundaries while their marketing departments contradict their stated ethics, is becoming harder to pull off. Regulators are paying attention. People are complaining. The window for “we didn’t know this could be a problem” is rapidly closing.
The real question isn’t whether PixVideo’s actual product has safeguards. It’s whether that even matters when an advertisement can mislead millions of people about what’s possible, and whether removing an ad is punishment enough, or whether something stronger is needed.