There’s something deeply unsettling about discovering that the glamorous influencer you’ve been following doesn’t actually exist. Worse still when you realize she was designed to exploit racial stereotypes and funnel you toward adult content.
That’s exactly what’s been happening on TikTok and Instagram. The BBC and researchers from Riddance uncovered dozens of accounts featuring highly sexualized AI-generated Black female avatars, none of which were labeled as artificial. TikTok has since banned 20 accounts, but that barely scratches the surface of a much larger problem.
The Deception Machine
These aren’t your run-of-the-mill fake accounts. The avatars are crafted with deliberate racist undertones. They wear skimpy swimwear, sport exaggerated body shapes, and feature artificially manipulated dark skin tones that scream “not real” to anyone paying attention. Yet most people don’t pay attention. Account names drop terms like “ebony,” “noir,” and “dark.” Many include comments like “loves white men” or “why I need a white guy in my life.”
The accounts link to paid-for explicit content on third-party sites. Here’s the kicker: those third-party sites label the imagery as AI-generated. But the Instagram and TikTok accounts themselves? No such transparency. It’s deception by design.
BBC analysts identified 60 accounts engaged in this behavior, mostly on Instagram, with chains of links leading to sexually explicit material. Many more exist without the explicit links, suggesting this trend is far more widespread than initially documented.
Stealing Real Lives
The most brazen part of this story involves real people getting their identities hijacked.
Riya Ulan, a Malaysian model and content creator, discovered that one AI account had stolen her videos wholesale. Her movements, clothing, backdrop, everything. Except the avatar’s face, with its artificially darkened skin tone, was overlaid onto her body. The account amassed three million followers in weeks.
One of these stolen videos reached 35 million views on TikTok and 173 million on Instagram. That’s 47 times more engagement than Riya’s original post ever received.
“I was angry,” Riya told the BBC. “Of course my videos are all out there… It doesn’t mean that you can just take it and steal it and post it as your own.”
What makes this worse: the videos copied from Riya were innocuous, but other videos on the same account, featuring the same digitally created character, showed it in revealing clothing or performing provocative actions. A chain of links from that account led straight to adult content. Riya’s image was being weaponized to promote explicit material she had nothing to do with.
She reported the account multiple times to both platforms. Neither acted until the BBC contacted them for comment.
Why This Is Racist
Let’s be clear about what we’re actually looking at here. This isn’t just a technology problem. It’s a racism problem with a high-tech wrapping.
Angel Nulani, one of the researchers, put it bluntly: “I believe these accounts are racist because their existence perpetuates a long history of the exploitation of black people. Their use of caricatures, race-play terminology and unrealistic depictions of black women prove they’re not concerned with our safety or wellbeing, but our ability to be capitalised as part of the online porn machine.”
AI has given fetishization a new tool. Jeremy Carrasco, who critiques AI trends, notes that “the new thing is the quantity of shameless, racist depictions of extremely black people.” AI makes it easier to manipulate skin tones in ways that look unnatural, to remove undertones, to create effects that would have required animation or skin painting before.
There’s also no social consequence for an avatar. “There’s no shame… that’s something AI uniquely exploits,” Carrasco explains.
Houda Fonone, a Moroccan model and advocate for authentic representations of Black women, framed it as “erasure.” The trend perpetuates a specific aesthetic: silky hair, impossibly thin bodies, flawless skin. “It’s as if black beauty can only be accepted when ‘refined,’” she said. Real stories and real experiences get replaced by artificial images designed to appeal to racist fetishes.
Platform Response and Ongoing Gaps
TikTok moved relatively quickly after BBC inquiries, banning 20 accounts within days. A spokesperson stated the company has “zero tolerance for content which promotes off-platform sexual services” and prohibits AI-generated content of individuals used without permission.
But here’s the thing: TikTok also said it “removed content and banned accounts which breach our rules.” Past tense, as if the problem were already solved. Nine Instagram accounts have disappeared, but Meta’s response was vague. The company said it was “investigating” and that it wants users to know when content is AI-generated, but it offered no details about what it actually found or what it plans to do beyond that.
Neither platform explained why Riya’s multiple initial reports went ignored.
The Broader Picture
This situation reveals how business incentives on social media platforms create environments where exploitation thrives. Engagement metrics reward controversial content. Moderation happens only when journalists shine a light on the problem. Platforms respond reactively, not proactively.
Users are getting better at spotting obvious fakes, but AI is making deception increasingly sophisticated. The line between real and artificial blurs by the week. As Riya pointed out, “people keep on falling for these AI models.” And why shouldn’t they? When platforms fail to label content properly and accounts actively deny using AI, trusting your eyes becomes a losing game.
The real question isn’t whether TikTok will ban more accounts or whether Meta will investigate harder. It’s whether these platforms will ever take responsibility for the ecosystems they’ve created, or if they’ll continue treating racism and exploitation as edge cases rather than structural features of how their algorithms work.