WITNESS calls on India to develop innovative, interoperable, and effective AI transparency regulations
On November 6, WITNESS submitted comments to the Ministry of Electronics and Information Technology (MeitY) in response to the Draft IT Amendment Rules “in relation to synthetically generated information”.
WITNESS welcomes the government’s recognition of the growing impact of AI-generated content. However, we caution that the proposed framework remains too broad and platform-centric to achieve genuine transparency or accountability.
The current amendment risks enabling over-broad content takedowns that could stifle freedom of expression, art, satire, and journalism. Drawing on nearly a decade of work at the intersection of AI transparency, detection, provenance, and human rights, WITNESS believes India can lead globally by creating a dedicated, risk-based AI Transparency Framework aligned with international best practices.
Effective governance must move beyond simplistic “AI or not-AI” binaries and instead focus on how content is created, disclosed, and used. The goal should be process transparency, not punitive or reactive content moderation.
WITNESS Executive Director Sam Gregory notes:
“We cannot regulate AI by only regulating intermediaries; we must build the infrastructure of trust through transparency, provenance and accountability through every stage of the AI pipeline.”
We also echo the perspective of India’s leading digital rights organization, the Internet Freedom Foundation (IFF), which has voiced deep concerns about the current draft rules. As Apar Gupta, IFF’s Founder and Executive Director, stated:
“IFF strongly recommends that MeitY withdraw the proposed draft amendments on ‘synthetically generated information’ (SGI) due to their sweeping unconstitutional overreach, technical infeasibility, and counterproductive impact on digital rights. A withdrawal does not mean abandoning the field. We urge the government to take a leadership role in crafting a new rights respecting framework that addresses the real and harmful impacts of SGI content that often impact the most vulnerable persons and communities and who lack power in society.”
WITNESS stands in solidarity with Indian civil society as they advocate for rights-respecting AI transparency frameworks. We especially acknowledge the sustained leadership of the IFF and its years of work advancing digital rights in India. At this critical juncture, India has the opportunity to set a global precedent for interoperable, human-rights-based AI governance, one that protects the most vulnerable while fostering accountability, innovation, and trust.
In our submission, we offered the following five recommendations and revisions to these rules, while maintaining our view that a different framework would be more effective:
- Define synthetic content more precisely. We recommend narrowing the definition of “synthetically generated content” to focus on intent to deceive and to exempt benign, assistive AI uses as well as satire, art, and other expression protected under international human rights standards on freedom of expression.
- Protect lawful expression and prevent overreach in takedowns made simply because AI is involved or suspected to be involved. We recommend deleting Rule 2(ii) to avoid unconstitutional overreach in moderating “synthetic content” under unlawful-acts provisions.
- Guarantee fairness and transparency in content removal. Users should receive notice when content is removed, have the right to appeal, and be able to see public transparency reporting about takedowns. Accountability should apply to both removals and errors.
- Promote visible but rights-respecting transparency, not a “10% overlay” rule. We recommend replacing the impractical “10 percent overlay” with less prescriptive but still clearly visible markings, such as icons or other clear markers that point to latent disclosure, for example tamper-evident, rights-respecting provenance metadata built on open standards.
- Reject mass automated verification. AI detection is not an exact science and should not be mandated within regulations. Evidence from WITNESS’s Deepfake Rapid Response Force and TRIED benchmark shows high error rates in detection tools. India should invest instead in open provenance infrastructure that supports transparency across the entire AI content pipeline.
While these interventions represent the minimum necessary improvements to the current draft, India’s long-term goal should be a more comprehensive framework. India now has a pivotal opportunity to shape a rights-based, globally interoperable AI transparency model, one that balances innovation, privacy, and accountability. The long-term goal should be a comprehensive AI Act with a specific transparency carveout that embeds responsibility across the AI lifecycle, from developers and deployers to intermediaries.