Meta Oversight Board Decision on AI-Generated Conflict Video Validates WITNESS’ Long-Standing Warnings, Now Meta Must Act

Today, the Meta Oversight Board, which reviews Meta’s content moderation decisions and makes policy recommendations to Meta for Facebook and Instagram, published its decision concerning a widely shared AI-generated video purporting to show missile damage in Haifa during the June 2025 Israel-Iran war.

“This decision could not come at a more urgent moment. As conflict continues across the region, the same systemic failures the Board identifies are playing out in real time, and Meta’s response to these recommendations will determine whether this ruling has any real impact,” said Mahsa Alimardani, Associate Director for Technology Threats and Opportunities at WITNESS.

The Board has the power to overturn Meta’s decisions on individual content but can only make policy recommendations to the company, which it can choose to accept or reject. It has exercised both powers in this case: reversing Meta’s original decision to leave the content up without a High Risk AI label, and issuing several recommendations aimed at strengthening how the platform handles AI-generated content. These include creating a new Community Standard for AI-generated content and improving labeling pathways for AI content during crises.

The decision also directly cites WITNESS’ analysis of how internet blackouts during the Iran-Israel conflict created information vacuums that misleading AI-generated media could quickly fill. The Board’s recommendations on content provenance, the implementation of C2PA Content Credentials at scale (a standard for tracking content origin and modifications to help ensure authenticity), and stronger detection infrastructure reflect frameworks WITNESS has been advancing for years, including through our TRIED Benchmark, which evaluates AI detection tools under real-world conditions.

According to Alimardani, the Oversight Board’s decision validates what WITNESS and frontline communities have been documenting: that deceptive AI-generated content in conflict is not an edge case; it is a structural feature of today’s information environment.

“AI-generated footage is reaching hundreds of millions of views, while simultaneously authentic documentation of civilian casualties is falsely dismissed as fabricated. Both directions of this problem – fake content accepted as real, and real content rejected as fake – are symptoms of the same failure. The Board has correctly identified that provenance infrastructure, consistent labeling, and detection investment are not optional enhancements: they are baseline obligations,” said Alimardani.  

Meta now has 60 days to respond to the Board’s recommendations. Based on Meta’s track record – including its failure to meaningfully implement recommendations from the Board’s 2024 case on non-consensual AI-generated sexual imagery – strong decisions do not automatically translate into action. Implementation is the test that matters, and WITNESS will be monitoring closely.

Context 

WITNESS submitted an expert comment to the Meta Oversight Board in December 2025, drawing on global research, frontline consultations, and real-time case analysis through the Deepfake Rapid Response Force, an initiative to quickly verify suspected deepfake or manipulated content.

The submission documented how AI-generated content functioned as a structural feature of the conflict information environment – overwhelming fact-checkers, exploiting internet blackouts, and accelerating what we have described as the liar’s dividend: the erosion of trust in all visual evidence, both synthetic and authentic.

This decision matters beyond the single case. As the current conflict continues – and as we document fresh cases of both AI-generated disinformation and the false dismissal of authentic civilian casualty footage – the systemic failures the Board identifies are not historical. They are ongoing. The same dynamics that allowed a deceptive video to reach 700,000 views without a label in June 2025 are operating today.

We call on Meta to implement these recommendations in full and with urgency – not as a future roadmap, but as an immediate operational commitment proportionate to the scale of harm already documented.
