Meta’s algorithms have deprioritised news content, reducing organic reach and referral traffic for local publishers.
Meta Platforms recently decided to scale back content moderation under the banner of freedom of speech, sparking an important discussion about the intersection of human rights and technology.
In principle, this move might seem like a victory for expression and democratic ideals. However, in practice, it feels like a gamble with far-reaching consequences that could reshape the digital public square in ways we are unprepared to handle.
As a black woman who grew up in South Africa, I have witnessed firsthand the dual power of speech: its ability to challenge systems of oppression and its capacity to wound and marginalise.
Social media, one of humanity’s most revolutionary tools for expression, has made these realities more visible than ever. It amplifies voices but also creates fertile ground for harmful behaviours.
Meta’s decision to scale back moderation might seem noble on the surface, but it requires deeper scrutiny in a world where platforms, rather than people, often dictate which voices are heard.
Freedom of speech is a cornerstone of democracy, empowering individuals to hold leaders accountable, demand justice and speak truth to power. However, as this principle migrates into the digital realm, it encounters new challenges shaped by the very platforms that claim to uphold it.
Meta’s algorithms, for example, are not neutral. They are designed to maximise engagement, often by surfacing content that provokes strong reactions. This means divisive, sensational or even harmful posts are amplified beyond their organic reach.
Under the guise of free speech, these algorithms risk creating an environment where harmful rhetoric and disinformation thrive unchecked. In this context, “freedom of speech” begins to feel less like a right and more like a weapon.
New technologies are reshaping how we experience and exercise fundamental rights. Meta’s social media platforms were initially heralded as tools of liberation, empowering marginalised voices and democratising access to information.
However, as these platforms have matured, so too have the complexities of their impact on human rights. Content that fuels division or perpetuates violence garners more visibility because it engages audiences. Algorithms prioritise these posts, not for their truth or value, but because they keep users scrolling.
Constant exposure to harmful content desensitises audiences, making violence — whether physical, verbal or structural — appear less shocking and more acceptable. Social media feeds reinforce existing beliefs, creating digital echo chambers that can radicalise individuals and deepen societal divides. The protective shield of online anonymity emboldens individuals to engage in harassment, threats and other harmful behaviours with little accountability.
In light of these realities, we must rethink what freedom of speech looks like in an era dominated by algorithms and digital platforms. Meta’s decision to relax content moderation policies offers a chance to reflect on the broader responsibilities of tech companies, governments and society.
Free speech cannot exist in isolation from other rights, such as the right to safety and dignity. Platforms must balance their commitment to expression with their obligation to protect users from harm. Content moderation is not the enemy of free speech; it is a tool for accountability. Platforms must invest in transparent and equitable systems to address harmful content without silencing dissenting voices.
Users also need tools to navigate online spaces critically, understanding how algorithms shape content and recognising the difference between fact and manipulation. Additionally, platforms must reimagine their algorithms to prioritise fairness and accuracy over engagement metrics. Without this shift, the digital public square will remain a space dominated by the loudest, not the most truthful, voices.
Meta’s decision to champion freedom of speech reflects a broader struggle to define human rights in the digital age. Users must hold these platforms accountable, questioning not only their policies but also the values they prioritise.
For Meta and other tech giants, the challenge lies in acknowledging that with great power comes not just great responsibility, but also a moral imperative to protect the vulnerable while empowering all. Freedom of speech remains one of humanity’s most cherished ideals, but it is not absolute; it exists within a framework of other rights.
In this increasingly digitised world, we must ensure that the right to speak does not come at the cost of the right to live free from harm. Failing to strike this balance risks turning the promise of free speech into an empty slogan.
The debate over Meta’s approach to content moderation is far from over. As technology continues to evolve, so too must our understanding of digital rights. The challenge now is to ensure that the principles of democracy and human rights remain intact in the age of algorithms. Meta’s gamble on free speech might redefine the digital landscape — but whether it does so for the better remains to be seen.
Ian Mangenga is the founder of digital community incubator Digital Girl Africa.