Privacy-First Transparency: WITNESS Response to the First Draft EU AI Act Code of Practice
When you interact with a chatbot, view a deepfake video, or encounter AI-generated content online, should you know about it? This question sits at the heart of one of the most consequential policy processes currently underway in Europe. Article 50 of the EU AI Act establishes that people must be made aware when they interact with AI systems or encounter realistic synthetic media.
The decisions being made now will shape not just user awareness but the very infrastructure of trust in digital content, especially during a period of coordinated disinformation campaigns and what scholars have termed worst-case scenarios of “epistemic collapse or fracture”.
Since November 2025, the European Commission has been convening experts from various stakeholder groups to draft a Code of Practice under Article 50 of the AI Act: the Code of Practice on Transparency. The primary objective of this framework is to develop measures that facilitate the identification of AI-generated or manipulated content, enhance transparency for users, and establish clear guidelines for providers and deployers of AI systems.
This Code of Practice will shape how AI tools, from chatbots and generative media to emotion-recognition, biometric categorization and deepfake technologies, inform users when they are interacting with or viewing AI-generated content.
WITNESS has actively engaged in these efforts and commends the European Commission, the AI Office, and the Co-chairs of Working Groups 1 and 2 on the progress achieved thus far. For over three decades, our organization has assisted communities in utilizing video and technology for human rights defense. Over the past eight years, we have observed that artificial intelligence possesses the capacity to both affirm truth and amplify disinformation. WITNESS endeavors to ensure that policies concerning transparency and disclosure regarding real and synthetic content are firmly rooted in human rights principles and address the requirements of critical frontline information actors, such as journalists and human rights defenders.
In our preceding submission, WITNESS urged the Commission to guarantee that the forthcoming Transparency Code of Practice adequately reflects the intricate, multimodal nature of generative AI and its consequences for accessibility, privacy, and the potential for governmental misuse. Building upon this foundation, we wish to articulate some of our central concerns (set out in our January 2026 submission) regarding the current iteration of the Code of Practice on Transparency.
Working Group 1: Rules for marking and detection of AI-generated and manipulated content applicable to providers of AI systems (Article 50(2) and (5) AI Act)
- Multi-layered marking is both feasible and within the scope of Article 50
Some stakeholders have raised concerns that multi-layered marking may be infeasible or fall outside the scope of the regulation. We do not share this view. Evidence from existing products clearly demonstrates that multi-layered marking is technically feasible: tools such as Adobe’s Content Credentials, Google’s SynthID (which combines watermarking with metadata), and OpenAI’s implementation of the C2PA standard all employ multiple layers of marking simultaneously. Furthermore, it falls within the scope of EU law, which provides for the development of frameworks, standards, and benchmarks to support compliance and ensure enforceability.
Recommendation: While there may be specific instances where multi-layered marking is impractical or potentially harmful, the chairs could call upon participants to identify such exceptions. Importantly, the existence of these exceptions should not undermine the overall requirement for multi-layered marking, which remains necessary for compliance with the regulation. A minimal sketch of what layered marking can look like in practice follows.
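To make the layered approach concrete, the sketch below attaches three independent marks to a single piece of content: a metadata manifest, an invisible-watermark stand-in, and a visible label. It is an illustrative toy, not any vendor’s actual implementation; the manifest fields loosely echo C2PA vocabulary but are assumptions for the example, and the content-hash “watermark” merely stands in for a real signal embedded in the media itself.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class MarkedContent:
    """Toy container for a piece of media carrying three marking layers."""
    data: bytes                                   # the media itself
    manifest: dict = field(default_factory=dict)  # metadata layer (C2PA-style)
    watermark_id: str = ""                        # invisible layer (stand-in)
    visible_label: str = ""                       # human-readable layer

def mark_content(data: bytes, generator: str) -> MarkedContent:
    # Layer 1: metadata manifest describing how the content was made.
    # A real deployment would cryptographically sign this record.
    manifest = {
        "claim_generator": generator,
        "assertions": [{"action": "created",
                        "source_type": "trainedAlgorithmicMedia"}],
    }
    # Layer 2: invisible watermark. Here just a content-bound identifier;
    # real systems (e.g. SynthID) embed a signal in the media itself so
    # that it survives metadata stripping.
    watermark_id = hashlib.sha256(data).hexdigest()[:16]
    # Layer 3: visible disclosure for human viewers.
    return MarkedContent(data, manifest, watermark_id, "AI-generated")

if __name__ == "__main__":
    marked = mark_content(b"synthetic image bytes", generator="example-model-v1")
    print(json.dumps(marked.manifest, indent=2))
    print("watermark:", marked.watermark_id, "| label:", marked.visible_label)
```

The point of layering is that the marks fail independently: stripping the manifest leaves the watermark, and cropping out the visible label leaves both, which is the practical argument for requiring the layers together rather than treating any one as sufficient.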
- Model requirements are necessary, though responsibility needs to remain with AI systems
Article 50, paragraph 2, assigns legal responsibility to providers of AI systems, not to AI models. Some measures as currently drafted may appear to place obligations on models. However, while responsibility remains with the AI system, certain requirements at the model level are necessary (1) to achieve compliance and (2) to enable downstream compliance by deployers.
For example, to ensure that watermarks are robust and reliable as required by the regulation, providers of AI systems may need to incorporate them during model training rather than only at inference. Such model-level measures reflect the current state of the art, without shifting legal responsibility away from the AI system. Article 50, paragraph 1, supports this approach by recognizing compliance through design and development.
Recommendation: Reword the draft to clarify that legal responsibility remains with the AI system, while noting that achieving compliance in practice may require certain model-level measures, and acknowledging that such measures may evolve with technological progress. The sketch below illustrates the inference-time baseline that such model-level measures would strengthen.
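To illustrate the distinction, here is a toy version of one published inference-time scheme for text, green-list logit biasing in the style of Kirchenbauer et al.: the vocabulary is pseudo-randomly split at each step and sampling is nudged toward the “green” half. Because the mark is applied only at sampling time, it can be weakened by paraphrasing or by derivatives of the model, which is one reason robustness requirements may point toward embedding watermarking during training instead.

```python
import hashlib
import numpy as np

def greenlist(prev_token: int, vocab_size: int, fraction: float = 0.5) -> np.ndarray:
    """Pseudo-randomly partition the vocabulary, keyed on the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).random(vocab_size) < fraction

def watermark_logits(logits: np.ndarray, prev_token: int, delta: float = 2.0) -> np.ndarray:
    """Inference-time watermark: nudge sampling toward the 'green' tokens."""
    biased = logits.copy()
    biased[greenlist(prev_token, logits.size)] += delta
    return biased

def detect(tokens: list[int], vocab_size: int) -> float:
    """Fraction of tokens in their green list; ~0.5 for unwatermarked text."""
    hits = sum(greenlist(prev, vocab_size)[tok]
               for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

if __name__ == "__main__":
    logits = np.zeros(1000)
    print(watermark_logits(logits, prev_token=42).max())  # green tokens get +2.0
```

This is a sketch of the technique family only; it does not represent any provider’s production system, and a training-time approach would bake a comparable signal into the model weights themselves rather than adjusting logits at generation time.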
- Privacy measures should be reflected in this Code
The current draft makes no mention of necessary privacy considerations, which are required, in one way or another, for all the techniques included in the draft. This is a serious gap. Many of the proposed marking techniques, depending on how they are designed and implemented, carry significant privacy risks, including misuse, profiling, and surveillance, implicating in particular the rights to private life and to the protection of personal data (Articles 7 and 8 of the Charter of Fundamental Rights of the European Union). Techniques relying on persistent identifiers or traceability may enable identification, profiling, or cross-context linkability, thereby triggering the GDPR and challenging core principles such as data minimisation and proportionality. In the case of the C2PA, for example, we have led a harm assessment that discusses these risks in detail, including mitigation strategies. This work provides a model for how other standards bodies and implementers can proactively address privacy concerns within transparency frameworks. A minimal sketch of a data-minimised provenance record follows the recommendation below.
Recommendation: At a minimum, the Code should clearly state that:
- Personally identifiable information should not be embedded in markings or provenance data by default;
- Where any user-related or contextual data is strictly necessary, it must be data-minimised, protected, and aligned with existing data protection law, including the GDPR;
- Control over such data should rest with the appropriate data controllers and rights-holders, not be broadly exposed or centralized.
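As an illustration of the first recommendation, the sketch below builds a provenance record that is data-minimised by default: non-personal facts about the content are always recorded, while any user-related field is strictly opt-in. The field names are illustrative assumptions, not the schema of any real standard.

```python
import json
from datetime import datetime, timezone

def build_manifest(model_name: str, content_hash: str,
                   user_id: str | None = None) -> dict:
    """Build a provenance record that is data-minimised by default."""
    manifest = {
        # Non-personal provenance: what made the content, and when.
        "generator": model_name,
        "content_sha256": content_hash,
        "created": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "ai_generated": True,
    }
    # Personal data is opt-in, never a default: it may be added only where
    # strictly necessary and lawful under the GDPR, and must remain under
    # the control of the appropriate data controller.
    if user_id is not None:
        manifest["user_id"] = user_id
    return manifest

print(json.dumps(build_manifest("example-model-v1", "ab12ef90"), indent=2))
```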
Working Group 2: Rules for labelling of deepfakes and AI-generated and manipulated text applicable to deployers of AI systems (Article 50(4) and (5) AI Act)
- EU-wide ‘information symbol’ icon
The objective of establishing a common, standardized icon is welcome, as it may enhance interoperability, support consistent implementation, and avoid fragmentation of the information ecosystem through the proliferation of multiple icons or labeling schemes.
However, the current drafting should more clearly reflect the diversity of content creation practices, in order to avoid confusion for end users. In particular, the proposed use of gradients or taxonomies (e.g. AI-generated versus AI-assisted content) raises concerns. In the absence of a clear, robust, and commonly understood taxonomy, such distinctions risk placing an undue cognitive burden on end users, who may struggle to interpret the meaning of each category.
At the same time, this approach may impose unrealistic expectations on deployers to determine, with precision and consistency, when content should be labelled, and which version of the label should apply. Inconsistent labelling practices and uneven user interpretation could, in turn, exacerbate confusion rather than contribute to clarity within the information ecosystem.
In this context, the adoption of a standardized, single, and interactive icon, functionally comparable to an information symbol, may be more effective. Such an approach would support user understanding, facilitate alignment across the ecosystem, and complement broader media literacy efforts. The interactive nature of the icon is particularly valuable, as it enables the provision of additional contextual information in a layered manner, allowing users to access further details as needed (a sketch of such layered disclosure follows the recommendation below).
Recommendation: The Code should take the following into consideration:
- The avoidance of a taxonomy-based icon, in order to prevent overburdening users, in favour of a single “more-information” type icon; and
- Adoption of a standardized, simple, and interactive single icon that would provide users with layered information about the media they are exposed to.
- Provision for the possibility that the removal of markings affects the deployment of the common icon at the deployer level.
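The sketch below shows what layered disclosure behind a single interactive icon could look like as a data structure: a passive viewer sees only the visible cue, and each interaction reveals more context. The layer contents and interaction triggers are illustrative assumptions, not a proposed specification.

```python
# Layered disclosure behind a single "more-information" icon: the first
# layer is the always-visible cue; deeper layers appear only on interaction.
DISCLOSURE_LAYERS = [
    {"layer": 1, "shown": "always",    "content": "AI involvement (icon)"},
    {"layer": 2, "shown": "on tap",    "content": "This video was generated with an AI tool."},
    {"layer": 3, "shown": "on expand", "content": "Generator, date, and non-personal provenance details."},
]

def disclose(interaction_depth: int) -> list[str]:
    """Return every disclosure layer the user has drilled down to."""
    return [d["content"] for d in DISCLOSURE_LAYERS if d["layer"] <= interaction_depth]

print(disclose(1))  # passive viewing: just the visible cue
print(disclose(3))  # the user asked for full context
```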
Artistic Content
Artistic content requires more nuance than most other content types, including in labeling efforts. We therefore highlight the importance of treating labels as an inherent part of the content and recommend that the Code of Practice address this issue as more than an add-on functionality. A label that feels clear and unmistakable in one setting may become far less noticeable or intuitive when the same content moves to short-form video apps, messaging channels, or platforms with different design norms, or when it is edited or clipped. This underscores the need for multi-layered marking techniques that uphold the inclusion of a ‘more-information’ icon, one that does not undermine artistic or satirical content and protects it on the visible or audible layer. It is worth highlighting again the need to ensure that these techniques, while required, do not infringe on privacy, but rather focus on non-personal provenance.
Recommendation: We therefore urge the Code of Practice to address the following:
- Artistic and satirical content can still be required to disclose AI generation or edits without undermining its intent and purpose, by using a standardized ‘more information’ icon rather than a taxonomy-based icon.
- The label needs to take into account how the content travels or is modified along the chain. A label can be clear and unmistakable in one setting and then become less noticeable or intuitive in other contexts.
Safeguards for Privacy and Transparency
As argued in this blogpost, a stronger privacy perspective must be embedded directly into the framework, as the current text does not meaningfully address the enforcement of privacy protections or the risk of expanded surveillance. Safeguards for privacy and anonymity must constitute fundamental, non-negotiable requirements integrated across standards, regulations, tooling, and governance mechanisms, including this Code of Practice. The default condition for compliance should not necessitate the collection of personally identifiable information. End users require enforceable control over the collection, processing, and sharing of their data. Absent explicit limitations and robust enforcement mechanisms, the framework risks compromising fundamental rights under the pretext of necessity.
Innovation must not become an excuse for avoiding existing responsibilities; minimum, present-day expectations should be clearly defined and not deferred on the promise of future technology. The framework’s effectiveness hinges on treating provisions like in-platform detection tools as expected standards, not mere encouragement. Crucially, the drafting process requires genuine multistakeholder input, extending beyond developers to include global academia, civil society, and human rights communities, to ensure diverse perspectives and public-interest concerns shape the framework’s substance and implementation.
We urge the chairs to ensure that the Code maintains its requirements and does not weaken in response to arguments based solely on formal scope, or critiques that it is over-prescriptive or onerous. Clear, prescriptive, and rights-aware provisions are both within the scope of Article 50 and necessary for its effective implementation. The substance of the current draft appropriately reflects these objectives.
WITNESS looks forward to engaging with the upcoming steps of this process.
The article was written by Bruna Martins dos Santos and Jacobo Castellanos from WITNESS’ Technology Threats and Opportunities program.