Tech and Advocacy

10 Mar Meta Oversight Board Decision on AI-Generated Conflict Video Validates WITNESS’ Long-Standing Warnings, Now Meta Must Act

Today, the Meta Oversight Board, which reviews Meta's content moderation decisions and makes policy recommendations to Meta for Facebook and Instagram, published its decision concerning an AI-generated video purporting to show missile damage in Haifa during the June 2025 Israel-Iran war, widely shared across social media.

"This decision could not come at a more urgent moment. As conflict continues across the region, the same systemic failures the Board identifies are playing out in real time, and Meta's response to these recommendations will determine whether this ruling has any real impact," said Mahsa Alimardani, Associate Director for Technology Threats and Opportunities at WITNESS.

The Board has the power to overturn Meta's decisions on individual content but can only make policy recommendations to the company, which it can choose to accept or reject. It has exercised both powers in this case: reversing Meta's original decision to leave the content up without a High Risk AI label, and issuing several recommendations aimed at strengthening how the platform handles AI-generated content. These include creating a new Community Standard for AI-generated content and improving labeling pathways for AI content during crises. The decision also directly cites WITNESS' analysis of how internet blackouts during the Iran-Israel…


02 Mar WITNESS Named Beneficiary of Proton Lifetime Charity Fundraiser

WITNESS is thrilled and honored to announce that we have been named a beneficiary of the Proton Lifetime Charity Fundraiser organized by the Proton Foundation, the governing non-profit organization behind the encrypted cloud storage service Proton Drive. As leaders in privacy and security, Proton and its community are generous supporters whose backing fuels our mission to protect truth and secure accountability.

At WITNESS, we help people everywhere use video and technology to protect video evidence and ensure we can trust what we see in a world increasingly shaped by AI. For years, we've worked to prepare human rights defenders globally for the realities of AI and emerging technologies. While we aren't new to this space, the landscape we work in and the challenges we face are ever-evolving, and the stakes have never been higher. Our partnership with Proton will help ensure WITNESS can continue to meet current needs and prepare for what's next.

Thank you to the Proton Foundation for its generous support and partnership. Proton is built on the idea of a better internet where privacy is the default. You can learn more about the work of Proton and the Proton Foundation here.


18 Feb India’s Synthetic Media Rules Build Enforcement on the Wrong Foundation

On 20 February 2026, India's Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 come into force. The rules, notified by the Ministry of Electronics and Information Technology (MeitY) on 10 February, introduce India's first regulations for synthetic media (referred to as "synthetically generated information," or SGI, in the rules). They mandate labelling, provenance metadata, and automated verification by platforms, and drastically shorten the time platforms have to remove flagged content.

In November 2025, WITNESS submitted comments to MeitY on the draft rules after consulting with local civil society. We drew on nearly a decade of global research and advocacy on synthetic media, content provenance, and human rights, and made five specific recommendations. Some were partially adopted: the rules now apply only to audio and visual content (not all AI outputs); routine AI-assisted tasks like color correction, noise reduction, transcription, and formatting are explicitly excluded; and an impractical requirement to cover 10% of content with a visible label has been removed. These are genuine improvements, and we welcome the government's responsiveness to civil society input. However, the final rules contain critical gaps that were not addressed, and introduce new provisions that were not part of the public consultation.


17 Feb Trust in What We See: What the AI Impact Summit Must Get Right on Audiovisual Truth

A welcome shift, an incomplete frame

Global leaders are convening in New Delhi for the India AI Impact Summit 2026, the first in this series to be hosted by a Global Majority country. WITNESS will be participating as part of civil society.

There is a welcome shift here. Where Bletchley Park and Paris were dominated by catastrophic risk and technical safety, India's framing pivots towards development impact: AI for the informal economy, frugal AI, democratizing access to compute, and Global Majority agency. These are priorities civil society has long championed. But a development framing without a human rights framework is incomplete. As Amba Kak of the AI Now Institute and Astha Kapoor of the Aapti Institute have argued, low- and middle-income countries risk advertising their populations as a path to scale for AI companies without attending to harms or creating guardrails. The summit's language of "safe and trusted AI" is not a synonym for rights-respecting AI. Rights-based frameworks create legal clarity and predictable obligations; "trust and safety" language leaves compliance open to interpretation. As Adebayo Okeowo, WITNESS' Associate Director of Programs and Regional Engagement, notes: "A development-first approach to AI is welcome, but development without rights protections has never served…


06 Feb Privacy-First Transparency: WITNESS Response to the First Draft EU AI Act Code of Practice

When you interact with a chatbot, view a deepfake video, or encounter AI-generated content online, should you know about it? This question sits at the heart of one of the most consequential policy processes currently underway in Europe. Article 50 of the EU AI Act establishes that people must be made aware when they interact with AI systems, including realistic synthetic media.

The decisions being made now will shape not just user awareness, but the very infrastructure of trust in digital content, especially during a period of coordinated disinformation campaigns and what scholars have termed worst-case scenarios of "epistemic collapse or fracture."

Since November 2025, the European Commission has been convening experts from various stakeholder groups on the Code of Practice under Article 50 of the AI Act, specifically the Code of Practice on Transparency. The primary objective of this framework is to develop measures that will facilitate the identification of AI-generated or manipulated content, enhance transparency for users, and establish clear guidelines for deployers and developers of AI systems. This Code of Practice will shape how AI tools, from chatbots and generative media to emotion recognition, biometric categorization, and deepfake technologies, inform users when they are interacting…

