News and Events

02 Mar WITNESS Named Beneficiary of Proton Lifetime Charity Fundraiser

WITNESS is thrilled and honored to announce that we have been named a beneficiary of the Proton Lifetime Charity Fundraiser organized by the Proton Foundation, the non-profit organization that governs the encrypted cloud storage service Proton Drive. As a leader in privacy and security, Proton brings the generous support of its community behind us – fueling our mission to protect truth and secure accountability. At WITNESS, we help people everywhere use video and technology to protect video evidence and to ensure we can trust what we see in a world increasingly shaped by AI. For years, we have worked to prepare human rights defenders around the globe for the realities of AI and emerging technologies. While we are not new to this space, the landscape we work in and the challenges we face are ever-evolving, and the stakes have never been higher. Our partnership with Proton will help ensure WITNESS can continue to meet current needs and prepare for what’s next. Thank you to the Proton Foundation for its generous support and partnership. Proton is built on the idea of a better internet where privacy is the default. You can learn more about the work of Proton and the Proton Foundation here.

READ MORE

27 Feb WITNESS Submits Public Comment to Meta Oversight Board on AI-Generated Sexual Exploitation

Meta ignored recommendations from its Oversight Board on the last case involving AI-generated non-consensual intimate imagery (NCII) and continues to fail to address the structural issues of technology-facilitated gender-based violence. WITNESS has submitted a public comment to the Meta Oversight Board on non-consensual, AI-generated sexualized impersonation. This case exposes what happens when a platform is told to fix a problem, documents its refusal to do so, and the predicted harm materializes. In 2024, the Oversight Board recommended that Meta overhaul how it handles AI-generated NCII, citing WITNESS twice in the decision that followed our 2024 submission. The Board told Meta to move its prohibition on sexualized manipulated media into the policy designed for sexual exploitation, to treat AI generation as a signal that consent is absent, to update its outdated terminology, and to stop relying on media coverage as a proxy for whether a victim has been harmed. Meta declined the most important of these recommendations and deferred the rest. Its own published response states that it “do[es] not expect to replace ‘derogatory’ with ‘non-consensual’” and “do[es] not expect that this will result in moving the prohibition.” These are not pending changes; they are documented refusals. “This case highlights a structural failure to treat NCII

READ MORE

18 Feb India’s Synthetic Media Rules Build Enforcement on the Wrong Foundation

On 20 February 2026, India’s Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 come into force. The rules, notified by the Ministry of Electronics and Information Technology (MeitY) on 10 February, introduce India’s first regulations for synthetic media (referred to in the rules as “synthetically generated information,” or SGI). They mandate labelling, provenance metadata, and automated verification by platforms, and they drastically shorten the time platforms have to remove flagged content. In November 2025, WITNESS submitted comments to MeitY on the draft rules after consulting with local civil society, drawing on nearly a decade of global research and advocacy on synthetic media, content provenance, and human rights. We made five specific recommendations. Some were partially adopted: the rules now apply only to audio and visual content (not all AI outputs); routine AI-assisted tasks such as color correction, noise reduction, transcription, and formatting are explicitly excluded; and an impractical requirement to cover 10% of content with a visible label has been removed. These are genuine improvements, and we welcome the government’s responsiveness to civil society input. However, the final rules contain critical gaps that were not addressed, and they introduce new provisions that were not part of the public consultation.

READ MORE

17 Feb Trust in What We See: What the AI Impact Summit Must Get Right on Audiovisual Truth

A welcome shift, an incomplete frame

Global leaders are convening in New Delhi for the India AI Impact Summit 2026, the first in this series to be hosted by a Global Majority country. WITNESS will be participating as part of civil society. There is a welcome shift here: where Bletchley Park and Paris were dominated by catastrophic risk and technical safety, India’s framing pivots towards development impact – AI for the informal economy, frugal AI, democratizing access to compute, and Global Majority agency. These are priorities civil society has long championed. But a development framing without a human rights framework is incomplete. As Amba Kak of the AI Now Institute and Astha Kapoor of the Aapti Institute have argued, low- and middle-income countries risk advertising their populations as a path to scale for AI companies without attending to harms or creating guardrails. The summit’s language of “safe and trusted AI” is not a synonym for rights-respecting AI: rights-based frameworks create legal clarity and predictable obligations, while “trust and safety” language leaves compliance open to interpretation. As Adebayo Okeowo, WITNESS’ Associate Director of Programs and Regional Engagement, notes: “A development-first approach to AI is welcome, but development without rights protections has never served

READ MORE

06 Feb Privacy-First Transparency: WITNESS Response to the First Draft EU AI Act Code of Practice

When you interact with a chatbot, view a deepfake video, or encounter AI-generated content online, should you know about it? This question sits at the heart of one of the most consequential policy processes currently underway in Europe. Article 50 of the EU AI Act establishes that people must be made aware when they interact with AI systems, including realistic synthetic media. The decisions being made now will shape not just user awareness but the very infrastructure of trust in digital content, especially during a period of coordinated disinformation campaigns and what scholars have termed worst-case scenarios of “epistemic collapse or fracture”. Since November 2025, the European Commission has been convening experts from various stakeholder groups to develop the Code of Practice on Transparency under Article 50 of the AI Act. The primary objective of this framework is to develop measures that facilitate the identification of AI-generated or manipulated content, enhance transparency for users, and establish clear guidelines for deployers and developers of AI systems. This Code of Practice will shape how AI tools – from chatbots and generative media to emotion recognition, biometric categorization, and deepfake technologies – inform users when they are interacting

READ MORE

