Trust in What We See: What the AI Impact Summit Must Get Right on Audiovisual Truth

A welcome shift, an incomplete frame

Global leaders are convening in New Delhi for the India AI Impact Summit 2026, the first in this series to be hosted by a Global Majority country. WITNESS will be participating as part of civil society. 

There is a welcome shift here. Where Bletchley Park and Paris were dominated by catastrophic risk and technical safety, India’s framing pivots towards development impact: AI for the informal economy, frugal AI, democratizing access to compute, and Global Majority agency. These are priorities civil society has long championed. But a development framing without a human rights framework is incomplete. As Amba Kak of the AI Now Institute and Astha Kapoor of the Aapti Institute have argued, low- and middle-income countries risk advertising their populations as a path to scale for AI companies without attending to harms or creating guardrails. The summit’s language of “safe and trusted AI” is not a synonym for rights-respecting AI. Rights-based frameworks create legal clarity and predictable obligations; “trust and safety” language leaves compliance open to interpretation.

As Adebayo Okeowo, WITNESS’ Associate Director of Programs and Regional Engagement, notes: “A development-first approach to AI is welcome, but development without rights protections has never served the communities it claims to center.”

For WITNESS, this gap has concrete consequences. Our mandate centres on the integrity of audiovisual evidence, the ability to distinguish authentic footage from AI-generated synthetic content, and the governance frameworks needed to protect both. What follows is where we see the summit falling short and what we are calling for.

Epistemic Fracture

Through our Deepfakes Rapid Response Force, our leadership within C2PA, and our TRIED benchmark for evaluating AI detection tools, we encounter the consequences of synthetic media daily, especially through our global and frontline work empowering communities to fortify the truth. The deeper damage goes beyond individual fakes. We are facing a widening epistemic fracture, in which doubt now shadows much of what is seen and believed. Each fabricated clip, each synthetic recording, each scam deploying a manipulated public figure corrodes the baseline assumption that audiovisual content bears some relationship to reality. That corrosion serves those who wish to deny documented abuses and suppress freedom of expression as much as those who fabricate content. This is the liar’s dividend: the political and strategic value of generalised doubt.

In India, this is a documented problem. A joint report by the Internet Freedom Foundation (IFF) and the Center for the Study of Organized Hate (CSOH) details the stark disconnect between India’s AI rhetoric and the ground reality of AI-enabled hate, discrimination, and surveillance. Tavishi Choudhary of the Center for the Study of Organized Hate, writing in Tech Policy Press, documents how generative AI has accelerated the production of dehumanizing content targeting Muslim communities, with the ruling party itself emerging as a contributor. Add to this the crisis of non-consensual AI-generated sexual imagery. In India, X’s Grok was used at scale to generate sexualised images of women and girls, with an analysis finding 6,700 sexually suggestive or nudified images were being created per hour. This is a harm that has intensified with widely accessible nudification tools and that, as we have argued, follows warnings that were systematically ignored. These are not edge cases.

India’s IT Amendment Rules 2026: a live test case for AI governance 

The summit’s credibility on AI governance is inevitably shaped by the host country’s own approach. India’s IT Amendment Rules 2026, notified on 10 February and coming into force on 20 February (ironically during the summit itself) represent the country’s first formal regulatory framework for synthetically generated information. They offer a real-time illustration of whether domestic practice can match multilateral ambition. In a country where the ruling party’s own state units have deployed AI-generated content to target minorities, and where a joint report by IFF and CSOH documents the weaponisation of AI for hate and surveillance, the design of these rules is not a technical question. It is a test of whether ‘safe and trusted AI’ means accountability or control.

WITNESS submitted detailed recommendations to MeitY during the public consultation in November 2025, with close guidance from local peers like the Internet Freedom Foundation. We urged a shift from platform-centric enforcement towards pipeline responsibility: accountability shared across AI developers, deployers, and intermediaries, underpinned by open, interoperable provenance standards that sustain rights and transparency.

Some of our recommendations were reflected in the final text: the prescriptive 10 per cent labelling overlay was dropped, and exemptions were introduced for routine editing, translation and accessibility. The rules also introduce India’s first provenance metadata mandate, requiring permanent metadata and unique identifiers for synthetically generated content, though without reference to any open or interoperable standard or privacy guarantees. But the automated verification mandate persists despite the well-documented unreliability of AI detection tools. The compressed takedown timelines (down to three hours without due-process safeguards) risk incentivizing over-removal of legitimate content. And the framework continues to load responsibility onto intermediaries rather than distributing it across the AI pipeline.

IFF has flagged several serious concerns with the final rules. WITNESS echoes many of these and will publish a fuller analysis soon. Effective synthetic media governance requires trust infrastructure, not content control. We hope MeitY will continue engaging with civil society as implementation unfolds.

What WITNESS is watching and advocating for

Detection was initially positioned as a priority for the Indian government’s summit agenda. The Safe and Trusted AI working group is the most directly relevant to our mandate, and we will be tracking whether its outputs address the distinction between content control and process accountability. We are also watching the summit’s engagement with Digital Public Infrastructure (DPI). As Kak and Kapoor note, infrastructure is not neutral: India’s own Aadhaar system demonstrates both the inclusion potential and the privacy risks of state-backed digital systems at scale. Provenance infrastructure for audiovisual content will face the same tensions.

We note the breadth of civil society engagement around the summit. The Internet Freedom Foundation has set out a critical but constructive approach. Access Now and the Global Network Initiative are convening parallel sessions. The Global South Alliance is hosting side events. These spaces often produce the most substantive conversations.

From WITNESS’s vantage point, we call on summit participants to advance the following:

Pipeline responsibility over platform-centric enforcement. Accountability for synthetic media must be distributed across AI developers, deployers and platforms, not loaded onto intermediaries alone.

Interoperable, open provenance standards. Trust in audiovisual content requires infrastructure, not just rules. Standards like C2PA offer a technically grounded path that works across platforms, borders and contexts.

Honesty about the limits of automated detection. Regulatory frameworks that mandate automated verification without acknowledging the unreliability of current tools will produce false confidence and collateral harm.

Transparency as process accountability, not content control. AI transparency should focus on how synthetic content is created, labelled, and distributed, not on expanding capacity to control what people see.

Meaningful civil society participation. Hosting a summit in the Global South is welcome. But symbolism without structural inclusion is insufficient. Frontline organisations must have substantive roles in shaping outcomes.

The AI Impact Summit matters not because it will resolve the crisis of audiovisual trust, but because it signals shared values at a moment when those values are contested. We hope India and the participating states will ground their commitments in enforceable, rights-respecting frameworks rather than aspirational language. The communities on the frontlines of AI’s harms cannot afford anything less.

 

What we are reading this week: 

WITNESS submission to MeitY consultation

IFF/CSOH: AI Governance at the Edge of Democratic Backsliding

IFF analysis of IT Amendment Rules 2026

Kak and Kapoor, Rest of World: “Can India be a ‘third way’ AI alternative?”

Tavishi Choudhary, Tech Policy Press: “India’s Global AI Pitch Masks a Troubling Reality at Home”

Apar Gupta, Tech Policy Press: “India’s AI Impact Summit Promises Little More Than Spectacle”
