The EU GPAI Code of Practice: Progress Made, But the Real Test Lies Ahead

The European Commission published the final version of the General Purpose AI Code of Practice on July 10th. The code, which providers of GPAI models may sign on a voluntary basis, establishes measures that providers can follow to demonstrate compliance with the AI Act’s rules for GPAI. While those rules enter into application on August 2nd, 2025, the Commission has also established a two-year grace period for models already on the market. At WITNESS, our priority has been to ensure the Code of Practice upholds transparency in a way that facilitates AI detection and content authenticity, preserves the information environment, and includes robust rights protections.

The published version of the CoP ended up covering three main subjects: (a) Transparency, (b) Copyright and (c) Safety and Security. 

“The final version of the Code of Practice represents a welcome and more balanced version of the interests at stake than the previous drafts,” said Bruna Martins dos Santos, Policy and Advocacy Manager at WITNESS. “We continue to be concerned that the interests of the private sector were prioritised over civil society’s requests to strengthen fundamental rights protections. We look forward to the next iterations of these efforts to continue to strengthen rights and accountability as the Code continues to be updated.”

WITNESS joined other civil society organisations in voicing concerns around the third draft of the Code of Practice in March 2025. We asked the Commission to address three main points following the third draft: (1) the weakened transparency and disclosure obligations, including a retreat from default third‑party evaluation; (2) an overly narrow and easily circumvented definition of systemic risk, threatening to exempt large AI providers from meaningful external oversight; and (3) the erosion of human rights protections, particularly around accountability for content provenance, copyright compliance, and the rights of creators and rights‑holders. 

General Comments about the CoP: Transparency, Copyright and Safety and Security 

The final version of the Code of Practice (referred to here as the Code or CoP) works towards strengthening transparency requirements across the AI value chain. We welcome the fact that providers of GPAI systems must produce detailed, publicly accessible documentation of model architecture, design choices, and high-level training data summaries, whilst balancing trade secrets and intellectual property considerations. Unfortunately, the text lacks a more robust third-party evaluation requirement as part of its risk management framework, which weakens its efforts to uphold rights and prevent harms.

WITNESS also considers it a positive step that the code requires high-capability models with potential systemic risk to undergo regular independent red-teaming, external stress-testing, and third-party audits. Accountability should never rely solely on internal expertise or self-assessment. In our previous comment, we urged that the CoP find a better balance between fundamental rights and harms, and solidify its scope so that providers cannot ignore certain risks. This is why we welcome the fact that the final Code adopts a clearer, more comprehensive approach to defining systemic risk.

We also urged the Commission to treat sociotechnical evaluation as an integral component of the risk assessment process, not merely as a secondary or optional commitment to consult. On this note, we welcome the fact that the CoP included broader societal impacts such as threats to safety, democratic integrity, and human rights in the scope, rather than limiting it to narrow technical thresholds alone.

In our previous comment, we also highlighted the need for a more standardized risk assessment process that is not left to the discretion of individual providers. In this sense, we welcome the fact that the CoP introduced continuous post‑market monitoring and mandatory cooperation with the EU AI Office, a point we see as fundamental for further embedding accountability throughout the AI lifecycle.

Lastly, the Copyright Section of the CoP explicitly reinforces protections for fundamental rights and rights-holders, a point we also consider relevant. Back in March, we urged the Commission to avoid pursuing methods that could compromise privacy or misuse metadata. Our concern was that the CoP could inadvertently promote the use of metadata standards such as C2PA for rights management, a purpose for which they were not designed.

The final version of the CoP includes clear commitments to honor copyright frameworks by respecting machine‑readable opt‑outs (e.g., robots.txt) and provides complaint channels for rights‑holders seeking redress. It also mandates transparency about model capabilities and limitations, helping downstream users and the public assess content authenticity and reduce risks of misuse or deception.
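In practice, the machine-readable opt-outs the CoP refers to are typically expressed through the Robots Exclusion Protocol. As a minimal illustrative sketch (the crawler names shown are real published user agents, but each provider documents its own, so any list should be checked against current documentation), a site operator opting its content out of AI training crawls while remaining available to search engines might publish:

```
# Illustrative robots.txt expressing a machine-readable opt-out from AI training crawls.
# Crawler names (GPTBot, CCBot) are examples; verify each provider's documented user agent.

User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Other crawlers, e.g. search indexers, remain allowed.
User-agent: *
Allow: /
```

Under the Code, signatories commit to identifying and honouring such signals when gathering training data, which is what gives this simple convention its regulatory weight.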

Ensuring Equal Participation in Broader EU Processes

Although the final result is a robust code, we must not forget that this initiative was the subject of reinforced calls for deregulation and strong pressure from the private sector that, at times, risked weakening the place of fundamental rights, democratic values, and the integrity of creative ecosystems in the text, all at the cost of a blind call for innovation without guardrails.

As the implementation of the AI Act begins, we urge the Commission to work with civil society and academic organizations not only in the code update process but also in other upcoming processes it steers. We echo the call by other civil society organisations and academics that the three sections of the code should be updated separately, almost as three separate codes.

We must evolve towards truly inclusive and transparent processes that apply multistakeholder principles to EU policy-making, rather than the closed-door process conducted with industry during the code’s drafting period.

Takeaways

The final Code of Practice marks a significant step toward embedding transparency, accountability, and fundamental rights into the governance of general-purpose AI systems. Several of the updates respond to civil society’s calls for stronger, enforceable standards, and acknowledge that rights-based safeguards, external evaluation, and comprehensive risk management are also necessary.

Although the final outcome is a relevant one, the code remains a voluntary measure even as the AI Act’s rules enter into application. We therefore urge developers of GPAI models to adhere to it, showing a stronger commitment to the idea that AI governance should always take into account values such as transparency, human oversight, and public accountability.

Lastly, we believe the multistakeholder approach should be at the very core of the CoP update process and upcoming Commission processes. We therefore invite EU policymakers to work together with civil society and academic organisations, listen more closely to our concerns, and ensure they are incorporated across the full spectrum of processes. As the implementation phase of the AI Act begins, scrutiny by civil society, researchers, and journalists will be essential to ensure these commitments are translated into practice.
