Meta: We need better content protections (not less) in the age of deepfakes & AI
Meta’s recent decision to allow more hateful and harmful speech online in the name of free speech, and to eliminate fact-checking on Facebook, Threads, and Instagram, is a significant setback. It risks serious harm to marginalized and vulnerable communities and compounds emerging risks from AI.
“While fact-checking is not a panacea for truth or trust, it is a critical part of defending fact from falsehood, holding the powerful to account, and fighting a blurring of truth and lies,” said Sam Gregory, Executive Director of WITNESS. “While content moderation decisions can go wrong, more often they are about preventing harm and hate and providing information to people to help make their own judgements on the information they consume. We still need platform accountability and content moderation grounded in global realities and human rights. In the dawning AI age, this need is greater than ever.”
Meta’s decision reflects a disregard for the voices and needs of those most affected by the threats these changes pose. Marginalized and vulnerable communities are often the first and most profoundly harmed by deceptive and false information and hate speech, as well as by real and coercive censorship by their governments. WITNESS’ allies and partners in Myanmar know this all too well, as do immigrant and LGBTQI+ communities in the US and globally. Rather than creating mechanisms to amplify how vulnerable communities, human rights defenders, and frontline fact-checkers protect themselves, fortify the truth, and confront lies, Meta’s recent actions go in the opposite direction and exacerbate risks.
For over three decades, WITNESS has helped people around the world use video and technology to defend human rights and share trustworthy information. While digital tools have increased the ability of civic witnesses, journalists, and ordinary people to document and expose abuses, they have also been increasingly weaponized to disrupt civil society, perpetuate hate speech, and endanger rights defenders and journalists.
New technologies, such as artificial intelligence-based tools, have heightened these challenges by enabling convincing simulations of authentic media, including sophisticated and subtle audio and video manipulations, which can further undermine trust and create new avenues for mis- and disinformation.
“The very same communities WITNESS works with are the ones with the least access and capacity to confront these new mechanisms for creating falsehoods,” said Sam Gregory. “We have already seen how the existing fact-checking community globally is disadvantaged in the fight to detect both AI-generated deception and claims by the powerful that reality has been falsified with AI. Further undermining and attacking them compounds this challenging situation.”
At WITNESS, we believe that the future of AI content creation, dissemination, and moderation is directly tied to the decisions being made today about social media content moderation: decisions about how AI will help or harm vulnerable communities globally, how safeguards will be implemented, and how authenticity and synthesis are communicated to people. Meta’s actions reflect a troubling trend of prioritizing corporate convenience, proximity to authoritarian power, and an unnuanced claim of freedom of speech over legitimate accountability and the safety of marginalized communities.
We are committed to challenging the power structures in the technology sector that perpetuate harm to human rights defenders and community journalists. This commitment is especially urgent as we navigate the implications of recent political shifts, such as the new Trump administration in the US, which is poised to exert significant influence over technology companies and AI governance – as exemplified by Meta’s recent announcement.
Learn more about WITNESS’ efforts to shape technology for human rights here and how you can get involved here.