Push for Deregulation of AI Makes a Concerning Appearance in U.S. Budget Reconciliation Bill

On May 11th, 2025, the U.S. House Energy and Commerce Committee introduced a Budget Reconciliation Bill that includes a broad prohibition on state laws or regulations relating to artificial intelligence or automated decision systems. As of May 22nd, the bill has been approved by the House of Representatives and awaits a vote in the Senate.

The provision in the draft Budget Reconciliation bill states that "no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act". This clause effectively establishes a 10-year federal preemption, prohibiting states and local governments from creating or enforcing their own AI regulations and undercutting efforts toward Artificial Intelligence regulation already underway in more than 45 states and Puerto Rico. This move is particularly alarming given WITNESS's long-standing work documenting how unregulated technology can be weaponized against human rights.

The suggested 10-year preemption of all state AI regulation also undermines states' rights and public protections in a rapidly evolving technological landscape. Artificial intelligence systems are widely deployed across today's society, including in law enforcement, hiring, housing, and healthcare. Regulation is often seen as one of the most comprehensive tools for addressing harms that can emerge from these systems, such as discrimination, bias, and other rights violations. Without the ability to regulate at the local level, states would be unable to intervene quickly to protect individuals from harm or misuse, especially when federal regulation lags behind.

The idea that regulation, or embedding human rights into the development of new products and technologies, is opposed to innovation is a dangerous fallacy. Trust in newly developed solutions can also grow from more robust frameworks that account for privacy, freedom of expression, non-discrimination, and content authenticity and provenance. Developing accountable, transparent, and trustworthy solutions, with companies acting responsibly to address the full stack of harms, should remain the focus alongside newer pushes for innovation.

At WITNESS we have been following the harms that flow from the impact of AI on audio and visual media. Dangerous deepfakes deployed to harm vulnerable communities, elections, or humanitarian efforts, and the erosion of trust inherent in the proliferation of AI-produced media, have been at the top of our human rights concerns for years. Robust legislation is needed to encode transparency and accountability at the root of AI models that are evolving and being released at mass scale. These regulations could be enacted at the federal level; in their absence, however, states should be able to create their own.

WITNESS has long pressed for these priorities, globally and in the US: 

  • Transparency and disclosure across the pipeline
  • Rights-based, not risk-only approaches
  • Protection from misuse and the ‘Liar’s Dividend’
  • Detection innovation and global access
  • Privacy and opt-out rights in AI training and tracking
  • Support for authenticity infrastructure

If approved by the U.S. Senate, this provision would arrive at a concerning moment, as many countries around the world are also discussing governance frameworks for Artificial Intelligence that aim both to foster innovation and to implement safeguards protecting users and the information environment. While the European Union has produced some of the most robust regulatory mechanisms for governing online technologies, such as the GDPR, the Digital Services Act, and the EU AI Act, tech companies have pushed back widely against this "Brussels Effect" in setting global standards and oversight. This backlash has been further exacerbated by the Trump administration's push to roll back regulatory initiatives, beginning with the revocation of the 2023 Executive Order "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence", and by its supercharging of a global "arms race" in AI innovation and investment in the US, which is in turn having concerning ramifications within the EU.

All governments need to recognise that, as with transformative technologies before it, AI innovation is not possible without regulation. The best way to protect human rights is through robust regulations, which can be enacted at both the state and federal level. Allowing this preemption to take effect without clear federal rules could set a precedent for other fast-evolving technologies, such as biotech, quantum computing, or advanced robotics, where local adaptability is critical.

Together with advocates, researchers, and many others concerned with the direction of U.S. policy, WITNESS urges Congress to remove this clause from the draft bill and to work with states to ensure that innovation and regulation go hand in hand. Federal consistency in AI policy should not come at the expense of state laws, and it must balance innovation and the fast evolution of digital technologies and Artificial Intelligence systems with proper safeguards for citizens.


