3 January 2020

Web threatens democracy and must be regulated – without limits on freedom of speech

Spill out: Facebook chief Mark Zuckerberg testifies before the US Senate judiciary and commerce committees on the protection of user data.

In October, a confrontation erupted between one of the leading Democratic candidates for the United States presidency, senator Elizabeth Warren, and Facebook chief executive Mark Zuckerberg. Warren had called for a breakup of Facebook, and Zuckerberg said in an internal speech that this represented an “existential” threat to his company.

Facebook was then criticised for running an advert by President Donald Trump’s re-election campaign that carried a manifestly false claim charging former vice-president Joe Biden, another leading Democrat contender, with corruption. Warren trolled the company by placing her own deliberately false advert.

This dustup reflects the acute problems social media poses for all democracies. The internet has in many respects displaced legacy media such as newspapers and television as the leading source of information about public events, and the place where they are discussed.

But social media has enormously greater power to amplify certain voices, and to be weaponised by forces hostile to democracy, from Russian trolls to American conspiracy theorists.

This has led, in turn, to calls for the government to regulate internet platforms to preserve democratic discourse itself.

But what forms of regulation are constitutional and feasible? The US Constitution’s First Amendment contains strong free-speech protections. Although many conservatives have accused Facebook and Google of “censoring” voices on the right, the First Amendment applies only to government restrictions on speech; law and precedent protect the ability of private parties such as the internet platforms to moderate their own content.

In addition, section 230 of the 1996 Communications Decency Act exempts them from private liability that would otherwise deter them from curating content.

The US government, by contrast, faces strong restrictions on its ability to censor content on the internet in the direct way that, say, China does. But the US and other developed democracies have nonetheless regulated speech in less intrusive ways.

This is particularly true regarding legacy broadcast media, where governments have shaped public discourse through their ability to license broadcast channels, to prohibit certain forms of speech (such as inciting terror or hard-core pornography), or to establish public broadcasters with a mandate to provide reliable and politically balanced information.

The original mandate of the Federal Communications Commission (FCC) was not simply to regulate private broadcasters, but to support a broad “public interest”. This evolved into the FCC’s fairness doctrine, which enjoined television and radio broadcasters to carry politically balanced coverage and opinion.

The constitutionality of this intrusion into private speech was challenged in the 1969 case Red Lion Broadcasting Co v FCC, in which the Supreme Court upheld the commission’s authority to compel a radio station to carry replies to a conservative commentator.

The justification for the decision was based on the scarcity of broadcast spectrum and the oligopolistic control over public discourse held by the three major television networks at the time.

The Red Lion decision did not become settled law, however, because conservatives continued to contest the fairness doctrine. Republican presidents repeatedly vetoed Democratic attempts to turn it into a statute and the FCC itself rescinded the doctrine in 1987 through an administrative decision.

The rise and fall of the fairness doctrine shows how hard it would be to create an internet-age equivalent. There are many parallels between then and now, having to do with scale. Today, Facebook, Google and Twitter host the majority of internet speech and are in the same oligopolistic position as the three big television networks were in the 1960s. Yet it is impossible to imagine today’s FCC articulating a modern equivalent of the fairness doctrine.

Consider antitrust

Politics is far more polarised; reaching agreement on what constitutes unacceptable speech (for example, the various conspiracy theories offered up by American radio show host Alex Jones, including the claim that the 2012 school massacre in Newtown, Connecticut, was a sham) would be impossible.

A regulatory approach to content moderation is therefore a dead end, not in principle but as a matter of practice.

This is why we need to consider antitrust as an alternative to regulation. The right of private parties to self-regulate content has been jealously protected in the US; we don’t complain that The New York Times refuses to publish Jones, because the newspaper market is decentralised and competitive.

A decision by Facebook or YouTube not to carry him is much more consequential because of their monopolistic control over internet discourse. Given the power a private company like Facebook wields, its decisions about what speech to allow will rarely be seen as legitimate.

On the other hand, we would be much less concerned with Facebook’s content moderation decisions if it were simply one of several competitive internet platforms with differing views on what constitutes acceptable speech. This points to the need for rethinking the foundations of antitrust law.

The framework under which regulators and judges today look at antitrust was established during the 1970s and 1980s as a byproduct of the rise of the Chicago School of free-market economics.

As chronicled in Binyamin Appelbaum’s recent book, The Economists’ Hour, Chicago School figures such as George Stigler, Aaron Director and Robert Bork launched a sustained critique of over-zealous antitrust enforcement. The major part of their case was economic: antitrust law was being used against companies that had grown large because they were innovative and efficient.

They argued that the only legitimate measure of economic harm caused by large corporations was lower consumer welfare, as measured by prices or quality. And they believed that competition would ultimately discipline even the largest companies. For example, IBM’s fortunes faded not because of government antitrust action but because of the rise of the personal computer.

The Chicago School critique made a further argument, however: the original framers of the 1890 Sherman Antitrust Act were interested only in the economic effect of large scale, and not in the political effects of monopoly. With consumer welfare the only standard for bringing a government action, it was hard to make a case against companies such as Google and Facebook that gave away their main products for free.

Self-filtering platforms

We are in the midst of a major rethinking of that inherited body of law in light of the changes wrought by digital technology. Economists and legal scholars are beginning to recognise that consumers are hurt by lost privacy and foregone innovation, because Facebook and Google sell users’ data and buy startups that might challenge them.

But the political harms caused by large scale are critical issues as well, and ought to be considered in antitrust enforcement. Social media has been weaponised to undermine democracy by deliberately accelerating the flow of bad information, conspiracy theories and slander.

Only the internet platforms have the capacity to filter this garbage out of the system. But the government cannot delegate to a single private company (largely controlled by a single individual) the task of deciding what is acceptable political speech. We would worry much less about this problem if Facebook were part of a more decentralised, competitive platform ecosystem.

Remedies will be difficult to implement: it is the nature of networks to reward scale and it is not clear how a company like Facebook could be broken up. But we need to recognise that although digital discourse must be curated by the private companies that host it, such power cannot be exercised safely unless it is dispersed in a competitive marketplace. — © Project Syndicate

Francis Fukuyama is a senior fellow at Stanford University and co-director of its programme on democracy and the internet