On the afternoon of February 2nd, I received an email from a student researcher named Angel Nulani. She had found 40 social media accounts with AI-generated Black women that perpetuated racist, anti-Black behavior. Coincidentally, earlier that day, I had received an email from Sharihan Al-Akhras from BBC News Arabic. She was working on a piece about an impossibly-black AI-generated character that stole real videos and “reskinned” them with AI.
Riddance and the BBC embarked on a joint investigation that uncovered over 100 social media accounts depicting the likenesses of Black women through AI-generated characters.
Research revealed accounts operating out of 34 countries across North and South America, Europe, Africa, and Asia. Together, they have amassed 19.4 million followers across Instagram and TikTok. The majority (68 accounts) actively promote monetization links to sites where they sell explicit AI-generated media.
AI-generated videos are especially suited to race-to-the-bottom trends because creators can avoid accountability. They push limits to get attention. But by depicting Black women, these accounts perpetuate a long history of exploiting Black people, using caricatures and self-hating characters to attract morbidly curious or racist audiences.
By identifying and examining these accounts, we can start to find solutions for this quickly growing problem. As AI media improves rapidly, solutions must keep up; otherwise, this AI-perpetuated exploitation will only get worse.
How AI advancements scaled digital blackface
Our sample of 100 accounts includes both large and small accounts. Of the 100 accounts, 63 were exclusive to Instagram, 4 were exclusive to TikTok, and 33 had pages on both. This means we tracked 136 pages in total.
Most of the Instagram accounts were created between mid-2025 and early 2026. A third were created in the past three months alone, and 60 changed their usernames at least once, meaning they “pivoted” to AI-generated Black women. The trend is clearly demonstrated in the graph below, measuring when the Instagram accounts were created and whether they still have their original usernames.
Google released the Veo 3 AI video model in May, and companies like Kling and Runway unveiled competing video models shortly after. AI-generated videos flooded Instagram and TikTok at this time, as creators experimented with AI slop but lacked clear monetization plans. Once their accounts grew, they looked towards profitable AI niches.
All 100 accounts had to satisfy three criteria:
They use entirely AI-generated media
They depict Black women (including numerous albino and vitiligo accounts)
They are sexual in nature
The explosion of accounts from late 2025 through today directly correlates with the releases of open-source AI generation models: FLUX-2, an AI image model launched in November, and Alibaba’s Wan 2.2 Animate Replace, an AI video model released in September. FLUX-2 can generate sexual images, and Wan 2.2 can “animate” them. This combination lets AI creators generate realistic, consistent AI videos without guardrails.
How the accounts depict black women
Many of the AI-generated characters demonstrate explicit, internalized anti-blackness. They may state a desire to be European (particularly Slavic) or ask if they’re “good enough” for white attention. Women with vitiligo and albinism are characterized as having one white parent, outright calling themselves white or half-white. At least three accounts have made references to the n-word, and one gives its followers an “n-word pass.”
Women are often depicted kissing famous white men like Donald Trump, John Cena, or Johnny Sins (a white male porn actor). Some accounts post about Jeffrey Epstein, with one character attempting to defend him. Thirty-five of the accounts include race-play terminology in their usernames, like “black”, “white”, “vanilla”, “charcoal”, “dark”, “ebony”, or “noir”.
Many accounts fetishize aspects of Black culture and the Black diaspora while having no understanding of them. Black American naming conventions are used for African characters. African accounts often have no specified ethnicity. And when they try to give characters a cultural background, they get basic information wrong. For example, Carolamater, an account based in Lithuania, depicts a Mursi woman with light skin and box braids. The real Mursi people, from southern Ethiopia, are known for their distinctive lip plates and body painting.
And then, there are the characters who have impossibly-black skin.
Not normal
What does it mean to have “impossibly-black” skin?
Part of my job is to dial in accurate skin tones on camera, which helps me identify exactly what’s wrong with these characters. Any real human, light- or dark-skinned, has skin tones falling along a narrow band of red-orange hues. This is clearly demonstrated by a vectorscope, a tool that analyzes color hue and saturation.
Below, I’ve highlighted four dots representing different skin tones. These all cluster on the top-left vector around what is known as the “skin tone line”, which slopes up and to the left.
Impossibly-black characters, on the other hand, land almost directly in the middle of the vectorscope, which means there is no color saturation in their skin. Notably, this is a separate measurement from the amount of light reflected off the skin, which can also vary. The resulting characters display a range of dark grey to pitch-black skin.
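The vectorscope reading can be approximated numerically. As a rough sketch (the RGB values below are illustrative examples, not pixels sampled from any specific account), converting a pixel to HSV with Python’s standard colorsys module exposes its saturation: real skin, however dark, keeps meaningful red-orange saturation, while an impossibly-black pixel is a near-neutral grey:

```python
import colorsys

def skin_saturation(r, g, b):
    """Return HSV saturation (0 to 1) for an 8-bit RGB pixel.
    On a vectorscope, saturation is the distance from center."""
    _, s, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return s

# A deep but realistic skin tone still carries red-orange saturation:
print(skin_saturation(96, 56, 38))   # ~0.60, well off the vectorscope's center

# An "impossibly-black" pixel is nearly neutral grey:
print(skin_saturation(28, 28, 30))   # ~0.07, almost dead center
```

Brightness (the V in HSV) can vary in both cases; it is the near-zero saturation, not the darkness, that marks these characters as unreal.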
Historically racist portrayals of Black people, like caricatures and minstrel shows, often painted faces literally black. And to be clear: AI-generated characters have normal skin tones by default, so human creators have to deliberately engineer impossibly-black characters. In testing, we confirmed that prompting FLUX-2 for “very dark, pitch black skin with no undertones” yields impossibly-black skin.
Nianoir is a character with impossibly-black skin. It had the most followers of all 100 accounts, with 3.1 million on TikTok alone — 2.7 million more than the next largest TikTok account. The account was removed from TikTok after our request for comment, but remains up on Instagram (we’ll dive into the platform responses later).
Their bio reads “Just a girl with dark side... 🖤” (sic). The Ukraine-based account makes many of its videos by stealing content from real creators, then replacing the people in them with the Nianoir character. The AI video model Wan 2.2 Animate Replace does this by design.
The BBC interviewed Riya Ulan, a young model from Malaysia, who had her own videos stolen by Nianoir. Ms. Ulan was stunned and tried to take action, but was unable to get the stolen content taken down until the BBC reached out.
The sad irony is that Nianoir’s comment sections are full of people posting screenshots of the original videos Nianoir stole, often from white or Asian creators like Ms. Ulan. These posts are met with accusations that the original creators are the ones whitewashing.
Organized, international operations
AI-generated media is about quantity before quality, so operators often run multiple accounts. Sometimes, this turns into full operations with teams of people working in content farms.
One operation, based in the United Kingdom, runs ten accounts, including four of the top ten in total followers. Their accounts totaled almost 3.5 million followers, though there’s significant follower overlap. These accounts follow one another, “collaborate” between characters, and some use the exact same character.
Their accounts include zurilovesvanilla, the fourth-most-followed account of all 100, and ayannasoblack, an impossibly-black character. Naledi_the_white_girl is an albino black character who once posted that she “still commits 50% of the crime.”
Many accounts had at least one former username, showing this group was making other AI content before pivoting to Black characters. Eighteen username changes before landing on “zurilovesvanilla” tells you this wasn’t their first idea, but it was their most profitable one.
How do they make money? The majority of accounts (68 of the 100) promote monetization links to sites where they sell AI porn. The other 32 are presumably waiting to grow large enough to monetize. None of the content on social media is explicit, so the objective is to capture an audience on social media, then funnel the “whales” — a gambling term for high-spending individuals — to the monetized sites.
A group based in Kosovo demonstrates the recency of the trend. They run seven accounts and focus on dark-skinned, curvy avatars, with a couple (shaykelt and nyxkash) sharing the exact same avatar. Their accounts combined for a total of 680,000 followers. Every account was created in the past two months and has no username changes. This group identified the niche and went all-in right away.
What platforms can do
I wasn’t expecting Instagram or TikTok to proactively remove many of these accounts, and for the most part, I was right. Over three weeks of collecting and tracking accounts, a few were taken down. But this was insignificant compared to the new accounts that popped up every day.
Then, Riddance and the BBC independently reached out to TikTok and Instagram for comment, and provided both with lists of accounts.
Within three days, TikTok banned 20 of 37 accounts, including removing Nianoir.xo, the single largest account on either platform. Several accounts belonging to the UK group (onlyskyewhite, zurilovesvanilla, emma.cynder) and the Kosovo group (nyxkash, abbyblacki9) were taken offline entirely. TikTok also applied AI-generated labels to the videos we shared. These were significant steps, taken quickly.
Instagram took down just two accounts: ayannasoblack and naledi_the_white_girl, both belonging to the UK content farm. Nineteen other accounts were age-gated, going from fully public to requiring digital ID to view. The remaining 59 accounts are still fully public and unchanged.
When we asked TikTok which policy those 18 deleted accounts had violated, the company cited its policy prohibiting “pretending to be a fake person or organization with the goal of misleading people.”
Instagram’s version of this policy, “Authentic Identity Representation,” prohibits accounts “created or used to deceive others.” But in practice, the policy targets accounts that impersonate real people, or networks engaged in coordinated manipulation. It doesn’t appear to be designed for scenarios like this, when someone creates an entirely fictional persona.
95 of the 100 accounts did not disclose anywhere on social media that they are AI-generated. None of them labeled individual posts as AI-generated, which both TikTok and Instagram require. In theory, when either platform detects a C2PA “watermark” indicating a video is AI-generated, it can label it as such. But the open-source AI models powering this surge, FLUX-2 and Wan 2.2, don’t embed C2PA metadata.
TikTok stated that over 1.3 billion videos have been labeled as AI-generated to date. It also stated that, in the most recent reporting period, 98% of the videos removed for violating its edited media and AIGC policies, and 90% of those violating its sexual activity and services policies, were removed proactively. Instagram declined to comment.
Instagram’s hateful conduct policy explicitly bans harmful stereotypes “historically linked to intimidation or violence, such as Blackface.” The impossibly-black characters documented in this article — characters with no undertones, no color saturation, skin that reads as literal black — are digital Blackface. In my opinion, Instagram’s written policy should cover them.
What’s missing?
Platforms should directly address AI-generated impersonation of marginalized groups. A user who encounters an impossibly-black AI character posting watermelon memes has to decide whether to report it as “hate speech” (which doesn’t capture the AI deception), “spam” (which misses the harm), or “AI-generated content” (which also misses the harm). The reporting categories don’t reflect the problem.
As agentic AI content pipelines become more common, we’ll need both automated and human interventions. Automated systems don’t have the depth of social awareness needed to catch what’s happening here. So while platforms will naturally focus on automated, software-based solutions, they should also partner with third parties that employ human investigators.
If you come across harmful content, report it for hate speech or sexual exploitation so platforms have that data. It might feel like the reporting goes unnoticed, but in aggregate it may help.
But while some people can still spot AI-generated media, quality improvements will outpace human detection in the long run. The accounts in this investigation are already more convincing than they were six months ago. Platforms must act now by banning accounts and finding algorithmic patterns beyond the videos themselves.
“We’re so beautiful”
Angel Nulani, the student researcher who brought many of these accounts to our attention, did hours of research for this project. As a Black woman, she found the investigation emotionally taxing. We asked her to be an editor and contributor to the piece. Angel is not a journalist or investigator (yet), but a student with incredible drive and attention to detail. We had assumed she was already a working professional.
As the investigation neared its conclusion, we asked her about how the content and investigation had affected her. We decided to include her response in full.
What upsets me is not that these characters are self-hating, but that there is no “self.” For the majority of the people who are behind these accounts, the only Black people they know are the women they generated. They were not born Black. They chose to be Black, and yet they spend so much time distancing themselves from it.
An ‘all lives matter’ shirt can be paired with booty shorts; crime statistics can be overlaid on thirst traps; videos crying over their features are spliced between suggestive yoga poses.
It’s not enough to pander to people that say “ebony” more than “African” because the appeal is not simply a Black woman desiring white men. The appeal is in a Black woman desiring a complete absorption into whiteness. The men they attract are not suitors, they are saviors.
I’ve tried so hard to disassociate. As a Black woman that’s been online since I was young, I assumed I saw it all. Nothing could really hurt me. At first, all I wanted was to be objective, and rational, and done with it. But this cuts me down to my core.
Ironically, that’s even more confirmation they could never be us, because when you’re real there’s no separating yourself from this. The watermelon emojis are par for the course for them; they’re not for me.
I’m terrified to think there’s a little Black girl somewhere feeling as terrible as I do right now because some guy in Malta wanted €3 a month.
They make us seem so ugly, but we’re beautiful. We’re so beautiful.