WITNESS Submits Public Comment to Meta Oversight Board on AI-Generated Sexual Exploitation
Meta ignored recommendations from its Oversight Board in the last AI non-consensual intimate imagery (NCII) case and continues to fail to address the structural issues of technology-facilitated gender-based violence.
WITNESS has submitted a public comment to the Meta Oversight Board on non-consensual AI sexualized impersonation. This case exposes what happens when a platform is told to fix a problem, refuses in writing, and the predicted harm then materializes.
In 2024, the Oversight Board recommended that Meta overhaul how it handles AI-generated NCII, citing WITNESS twice in its decision following our 2024 submission. The Board told Meta to move its prohibition on sexualized manipulated media into the policy designed for sexual exploitation, to treat AI-generated content as a signal that consent is absent, to update its outdated terminology, and to stop relying on media coverage as a proxy for whether a victim has been harmed. Meta declined the most important of these recommendations and deferred the rest. Its own published response states that it “do[es] not expect to replace ‘derogatory’ with ‘non-consensual’” and “do[es] not expect that this will result in moving the prohibition.” These are not pending changes. They are documented refusals.
“This case highlights a structural failure to treat NCII as sexual abuse material,” says Bruna Martins dos Santos, the Technology Threats and Opportunities Advocacy and Policy Manager at WITNESS.
A non-public figure, the population the Board explicitly warned was most at risk, had AI-generated sexualized content assessed under Meta’s Adult Nudity and Sexual Activity policy, which contains no framework for consent. The content was not assessed under the Adult Sexual Exploitation standard that the Board recommended, nor even under the Bullying and Harassment standard, where Meta itself says the prohibition belongs. Every failure the Board identified in 2024 recurs here.
The crisis is accelerating.
“AI-generated NCII has moved into mainstream distribution ecosystems, where the shift is not just in volume but in ease, plausibility, and reach,” says shirin anlen, AI Research Technologist and Impact Manager at WITNESS, “but it is also reflected in structural features of generative systems, advertising infrastructure, and platform enforcement design. Abuse has become industrialized, with consumer-facing nudification and deepfake tools embedded in the commercial ecosystem and promoted through advertising systems.”
As WITNESS documented in its March 2025 submission to the UN Human Rights Council, this form of abuse targets everyday people, disproportionately women and girls. At the same time, increasingly realistic generative systems enable high-fidelity likeness exploitation without consent. A recent Oxford Internet Institute study found that 96% of publicly downloadable deepfake models target identifiable women. Growing realism fuels plausible deniability, while platforms fail to label AI-generated content 70% of the time, including content produced with their own tools, undermining enforcement frameworks that rely on proving manipulation. The January 2026 Grok disaster demonstrated that harm occurs at the moment a system generates non-consensual sexualized depictions, not only when that content is shared, making this not just a moderation issue but also a product design and safeguards issue. AI-generated NCII is a form of technology-facilitated gender-based violence that demands a policy response grounded in consent and sexual exploitation frameworks, not content moderation half-measures.
Finally, as smart glasses shipments grew 210% in 2024, AI-enabled wearable devices are creating a seamless capture-to-exploitation pipeline. In February 2026, a self-proclaimed pick-up artist used Meta’s Ray-Ban smart glasses to covertly record women across Kenya and Ghana, monetizing the footage through TikTok, YouTube, and a paid Telegram channel. Kenya’s Ministry of Gender, Culture, and Children Services called it “a serious form of technology-facilitated gender-based violence and exploitation.” Meta is not only the platform where non-consensual intimate content circulates. It is also the manufacturer of hardware that enables covert likeness capture.
WITNESS agrees with the Board's 2024 recommendations, which remain unimplemented, and urges the Board to reiterate them. The need to reframe synthetic NCII, the presumption that consent is absent in the generation of this content, and the call to strengthen transparency all remain timely and necessary, as highlighted in our submission.
In addition to endorsing the previous recommendations, WITNESS suggested the following to the OSB:
- Recognize AI-generated NCII as technology-facilitated gender-based violence, aligning policy language accordingly.
- Adopt a presumption of non-consent for realistic AI-generated sexualized impersonations of identifiable individuals, even when such images are generated from media the individuals themselves made available, since sexualized manipulation falls outside the purposes for which that media was shared.
- Mandate human review for all AI sexual impersonation reports.
- Create a dedicated reporting pathway for AI-generated sexualized impersonation.
- Implement product-level safeguards to prevent the generation of sexualized impersonations at the system design stage.
- Extend product accountability to AI-enabled capture devices. Meta should establish responsible design standards for its hardware products to prevent covert likeness capture from feeding into exploitation pipelines.
WITNESS looks forward to engaging with the OSB and Meta on the next stages of this case, and we hope that the platform takes into account the previous and forthcoming OSB recommendations on how to address non-consensual AI sexualized impersonation.