European Parliament

Navigating Human Rights in the EU AI Act: WITNESS’s Call for Thoughtful Transparency

In September, the European Commission began implementing Article 50 of the EU AI Act, the EU’s first comprehensive law regulating artificial intelligence, by launching a public consultation to draft guidelines and a Code of Practice (CoP) on AI transparency. The outcome will shape how AI tools, from chatbots and generative media to emotion-recognition, biometric categorization and deepfake technologies, inform users when they are interacting with or viewing AI-generated content.

In its official submission, WITNESS called on the Commission to ensure that the forthcoming Transparency CoP reflects the complex, multimodal nature of generative AI and its impact on accessibility, privacy, and the potential for misuse by governments.

For more than two decades, WITNESS has helped communities use video and technology to defend human rights. Over the past eight years, the organization has observed how artificial intelligence can both empower truth and amplify disinformation. WITNESS works to ensure that policies for transparency and disclosure around real and synthetic content are grounded in human rights and respond to the needs of critical frontline information actors like journalists and human rights defenders.

Since 2020, WITNESS has also been actively involved in the Coalition for Content Provenance and Authenticity (C2PA), helping shape authenticity and provenance infrastructure so that it protects privacy and serves human-rights defenders and journalists rather than only corporate or state interests.

As generative AI becomes part of daily communication, WITNESS calls attention to the importance of strengthening disclosure, provenance, and labeling approaches that foster information integrity, trust and informed digital participation. These mechanisms can help people navigate an increasingly complex information environment, combat deception and address harms while safeguarding speech rights and enabling innovation.

Why thoughtful and comprehensive transparency matters

“As hyper-realistic AI content becomes widespread, we need transparency standards now—not just to distinguish reality from fabrication, but to preserve space for creativity, allegory, and satire. The stakes go beyond spotting individual deepfakes. Without the comprehensive infrastructure and tools to navigate our information environment with nuance, disclosing what is authentic and what is synthetic clearly, we risk undermining broader trust in visual truth itself.”
Bruna Martins dos Santos, WITNESS’s Policy and Advocacy Manager

We believe that transparency should build public trust without fragmenting the information landscape or placing disproportionate responsibilities on specific sectors such as the judiciary, media, or civil society. It must aim for a cohesive environment where open standards strengthen understanding rather than creating confusion or inequity. Distributing transparency obligations fairly across the ecosystem helps maintain balance and encourages cooperation among all stakeholders.

At the same time, transparency requirements should foster accessibility and innovation, supporting an interoperable and dynamic ecosystem for providers and deployers both within and beyond the EU. Standards must work seamlessly across different contexts—spanning diverse media, communication channels, and technological environments—while allowing flexibility to tailor information to varying audiences.

Crucially, these measures should be privacy-preserving by design, never compromising anonymity or pseudonymity. The emphasis should remain on clarifying how AI systems operate—through their agents, content, and interactions—rather than exposing who is behind them. Finally, the frameworks must remain adaptable, capable of evolving alongside emerging AI scenarios, including pervasive and agentic applications as well as hybrid human-AI collaborations.

Article 50 and the Transparency Code of Practice

As part of its official submission to the European Commission’s consultation on Article 50, WITNESS outlined key recommendations to ensure that transparency requirements protect human rights, foster accountability, and remain effective in the real-world context of complex, multimodal AI systems:

Move beyond false binaries

For transparency to remain effective under Article 50 and the incoming Transparency CoP, it must resist false binaries and reflect the real-world complexity of AI use, including remixing, re-use, and combined human and AI inputs. From creative remixes to deceptive deepfakes and personalised feeds like Sora2 or Vibes, AI constantly reshapes how information appears and spreads. In this environment, context, and the risk of context collapse, matters more than ever.

The CoP must clarify the definition of “deepfake”, a term that currently encompasses a growing range of AI and AI-hybrid content. For the incoming approaches to be meaningful, signals need to be preserved and updated iteratively across the content lifecycle, avoiding a simple “made by AI” or “made by human” binary and reflecting the fact that content is not static. The applicability of the Transparency CoP to “AI slop” and personalized content also needs to be addressed.

Protect privacy and prevent misuse

The CoP must ensure that transparency does not come at the expense of privacy, anonymity, and pseudonymity. Personally identifiable information should not be a prerequisite for verifying whether content is AI-generated or manipulated. The focus should remain on the “how” of AI usage (agent and content) and AI-human interaction, not on the “who”, with its attendant privacy implications.

In this sense, we consider that integrating fundamental rights considerations is key to ensuring implementations are privacy-protecting, accessible, and not prone to government weaponization. Without this foundation, disclosure tools intended to promote trust could instead expose journalists, activists or creators to new risks, or be used to censor or surveil under the guise of combating misinformation. Ultimately, even the most “effective, interoperable, robust and reliable” labelling systems will fail if they ignore the human rights of the people they aim to protect. Technical design must serve those rights from the outset.

Reinforcing trust without fragmentation or weaponization

Measures should build trust without creating new divides or increasing burdens on critical sectors such as the media or the courts. Authentic material should never be dismissed or devalued simply because it lacks a label or provenance data. The drafting process for the CoP should engage courts, judicial systems, and media outlets to map collateral implications.

Secondly, we call attention to the significant risks of “weaponization of truth” in both EU and global jurisdictions. Governments have used fake news laws, surveillance requirements, or mandatory data retention obligations to suppress dissent under the banner of fighting disinformation. The EU’s transparency framework must prevent such misuse, ensuring that transparency empowers people to hold power accountable — not the other way around.

Interoperability and real-world use

Transparency standards will only work if they are interoperable and accessible across media formats, platforms, communication spaces and jurisdictions, within and outside the EU, and flexible enough to deliver useful, accessible information to different kinds of users.

A key concern is ensuring that the ‘recipe’ of AI use (information about how AI or editing contributed to a piece of content) stays attached throughout the content’s life cycle, including creation, editing, remix and reuse. To be effective, this requires strong interoperability and a commitment from providers, deployers and potentially other actors to preserve this information. Currently, there are significant gaps in the interoperability and preservation of critical signals across the information ecosystem, and addressing them is essential for transparency and disclosure to work. Additionally, EU standards need to be interoperable and compliant with existing global efforts like C2PA and W3C, to guarantee that provenance and labelling systems function across different platforms and jurisdictions.

On this note, we hope that the Commission continues to prioritize the development of harmonized standards under the AI Act, including clear enforcement for open-source AI systems.

However, many questions remain. How will the CoP resolve the overlapping obligations of deployers, providers and users when it comes to marking content as AI-generated or manipulated, and to maintaining those markings across distribution, modification, re-use and remixing? Will deployers face additional duties to mark or detect content beyond those set out in Article 50? And will there be obligations to ensure that disclosure signals cannot be removed or altered once attached?

Interaction with existing legislation, content moderation and data access

We trust that the Transparency CoP and its implementation process can help harmonise existing EU regulations and resolve potential overlaps with other transparency obligations established under the Digital Services Act (DSA), GDPR, ePrivacy Directive, and related frameworks. The DSA already includes transparency and content-moderation rules that can apply to deepfakes and synthetic media, which should be closely considered in developing the CoP.

Content moderation for AI-generated material (particularly so-called AI-slop) remains an emerging challenge. With new forms of personalized and individualised feeds, such as those enabled by Sora2 and Vibes, the CoP could set an important precedent by creating a structured dialogue between AI governance, transparency standards and moderation practices. This collaboration would help ensure that evolving moderation frameworks remain effective, consistent and capable of addressing the risks posed by low-quality or misleading AI-generated content.

Data access is another key element of transparency under the AI Act. Connecting Article 50 implementation with provisions such as Article 40 of the DSA, or broader voluntary data-access initiatives, would allow researchers to evaluate effectiveness, bias and discrimination in practice.

Finally, drawing on lessons from the use of data signals in existing moderation systems, it is vital that all actors across the transparency and disclosure pipeline apply globally equitable attention, resources and openness in using these measures. Doing so can help prevent the uneven risk distributions and algorithmic discrimination that have characterised current moderation approaches.

Simplification and the Digital Omnibus

As the EU develops the Transparency Code of Practice, the effectiveness of these technical and legal measures will depend on how consistently the broader digital framework is implemented. The upcoming proposal for a Digital Omnibus initiative will play a major role in that process, determining how Europe aligns, streamlines and enforces its digital regulations, including the AI Act.
WITNESS submitted an input to the Call for Evidence on October 14, 2025. Our input focuses on the implementation of the AI Act and sets out recommendations that the EU Commission should take into account. We urged the Commission to:
(a) guarantee consistent implementation of the AI Act across Member States;
(b) balance accountability, innovation and users’ rights;
(c) ensure that simplification efforts do not weaken fundamental-rights protections; and
(d) uphold transparency, accountability and multistakeholder participation as guiding principles throughout the Digital Omnibus process.

Recently, in a debate on the Corporate Sustainability Due Diligence Directive (CSDDD), European Commission President Ursula von der Leyen used the term “deregulation” to describe the current strategy for one of the first times. This confirms concerns raised by civil society organisations that the Omnibus mechanism could be used to lower the cost of compliance with existing EU regulations, to the detriment of fundamental-rights protections.

Rights at the core of AI governance

Across all current EU discussions, from the Article 50 Code of Practice to the Digital Omnibus and broader AI-governance initiatives, WITNESS remains steadfast in its commitment to the fundamental principles of human rights. We will continue to advocate for transparency and accountability measures that are both effective and rights-preserving: privacy-preserving standards that foster trust, and open, interoperable systems that make these commitments a reality. Simplification can strengthen these regulations, but only if it does not dilute the protections of people and democracy. Regulations must enhance access to justice and rights, especially for the most vulnerable and marginalised, not remove them.

AI is increasingly becoming the defining reality of much of communication and governance. Now is the moment for Europe to lead by example and build the frameworks needed to protect truth, empower users and ensure technology serves society. WITNESS will continue to fight, within the EU and beyond, to make this vision a reality.
