WITNESS Submits Expert Comment to Meta Oversight Board on AI-Generated Video in the Israel–Iran Conflict
On 2 December 2025, WITNESS submitted a public comment to Meta’s Oversight Board in response to a post featuring an AI-generated video, circulated during the June 2025 Israel–Iran war, that falsely showed destruction in Haifa.
WITNESS urged Meta’s independent Oversight Board, which reviews content decisions on the platform, to go beyond incremental fixes. The organization recommended urgent investment in robust provenance infrastructure, advanced AI detection systems, clear and contextualized labeling, strengthened fact-checking, user controls, likeness protection policies, and a governance framework grounded in human rights and global equity.
According to Mahsa Alimardani, Associate Director of Technology, Threats and Opportunities at WITNESS, the case raises urgent questions about how platforms identify, label, and respond to synthetic media in fast-moving conflict or high-risk situations, where misleading visuals can spread more quickly than verification or platform action.
“Highly realistic AI-generated content is now shaping public understanding of events before facts can be established. We encourage the Oversight Board to tackle this challenge head on and demand that Meta cultivate sustained investment in transparency infrastructure, detection systems that work in real-world conditions, and platform responses that help users understand how content was created without undermining trust in authentic evidence,” Alimardani said.
The submission also highlights how AI-generated content has become a structural feature of platforms, often undermining trust in visual evidence, overwhelming fact checkers, and exposing the limits of current detection, labeling, and provenance systems. WITNESS calls for urgent implementation of rights-respecting, evidence-based frameworks for the governance of synthetic media, as the scale, realism, and accessibility of generative AI continue to accelerate.
Video and technology to defend human rights
WITNESS’s recommendations are also grounded in its participation in global technical and policy processes, including its leadership in promoting human rights standards within the Coalition for Content Provenance and Authenticity (C2PA), its contributions to AI transparency and provenance standards, and its advisory engagement on AI regulation and codes of practice across multiple jurisdictions.
The submission draws on more than three decades of experience supporting people who use video to defend human rights, and nearly a decade of focused work on generative AI, synthetic media, provenance, detection, and platform governance. It is also informed by WITNESS’s real-time analysis through the Deepfakes Rapid Response Force (DRRF), whose work has led to the development of the TRIED Benchmark for evaluating AI detection tools under real-world conditions, as well as by contributions to shared evaluation datasets for AI detection systems such as the MNW benchmark.