From Catch-22 to Catch 230: Facebook's free speech loophole
Orr was crazy. That meant he didn’t need to fly any more US combat missions in World War II. The rules said all Orr needed to do to be grounded was to ask. But, as Doc Daneeka explained to Yossarian, there was a catch: anyone who wanted to get out of combat wasn’t crazy. “That’s some catch, that Catch-22,” Yossarian observed. With this scene in his 1961 bestseller of the same name, US author Joseph Heller coined a term for the dilemmas and absurdities that arise from conflicting rules.
Historian Niall Ferguson describes ‘CDA 230’ as the most important Catch-22 of the internet age. The letters and numbers are short form for the law that, due to conflicting definitions of the status of platforms, grants social-media networks such as Facebook legal immunity for the content they host and the latitude to decide what content they won’t host.
The Catch-22 came about when US policymakers decided internet-service providers needed protection after a court in 1995 ruled that an internet company was liable when a user defamed a bank on its message board. The resulting section 230 of the Communications Decency Act of 1996 cleared the way for the rise of platforms, including Facebook and YouTube, dedicated to user-generated content.
In its most significant part, CDA 230 instructs that platforms are not publishers. This largely absolves them of responsibility for harm caused by content posted or shared by users, whereas traditional publishers are responsible for the content they and their users publish offline and online. The other key part of CDA 230 conveys that platforms are publishers. This gives them the right to censor content and bar people they find objectionable.
The greatest threat to the ever-more-influential platforms is Washington repealing the immunity for content, because platforms would then need to vet every post they host to ensure it complied with the law and was not defamatory – Facebook, for instance, would have more than three billion users to oversee. Abolishing the ability to remove objectionable content wouldn’t maul the viability of platforms, just encroach on their rights as private entities.
The threat to the CDA 230 immunities ballooned after supporters of former president Donald Trump stormed the Capitol in January to dispute the validity of the 2020 election. Apple, Alphabet (owner of Google), Amazon, Facebook, Twitter and others highlighted their quasi-censorship powers under CDA 230 when they banned Trump for “fanning the flames” and disabled some conservative sites such as Parler. At the same time, the incident highlighted their legal immunity for allowing the sharing of misleading information about the election and for hosting groups such as ‘Stop the Steal 2020’.
By the end of April, the number of proposals to amend CDA 230 circulating in Washington stood at about 40. A bill co-sponsored by Democratic Senator Mark Warner, for example, would make it easier for people to seek legal redress if content abuses, discriminates against, harasses or threatens physical harm to them and the platforms take no action. “How can we continue to give this get-out-of-jail card to these platforms?” Warner asks.
The senator would know the answer. When it comes to restrictions on content, authorities are stymied by free-speech protections. Policymakers grasp that limiting the protections would maim platforms such as Wikipedia that are not accused of harm. They know it would make it prohibitively expensive for platforms to manage the ratings and reviews that users value on online services such as AllMusic, Amazon and Tripadvisor. Lawmaker calls to abolish the online anonymity that shields trolls also succumb to free-speech protections against the “tyranny of the majority”.
The quandary officials need to solve when it comes to platforms removing any content they find objectionable is that cyberspace is the modern public square. Platforms determine the line where free speech crosses into unacceptable speech – a line that is often arbitrary. Government cannot intervene when Silicon Valley blocks, as ‘hate speech’, views that offline are protected free speech and that many people might find acceptable.
Two other concerns snooker policymakers. One is that Washington needs Big Platforms to beat back China’s drive to dominate the technologies of tomorrow. The other is that democratically elected politicians are reluctant to sabotage platforms that voters view as essential and harmless and that businesses regard as valuable advertising tools. The likely outcome? Platforms will retain legal immunities not available to others.
The CDA 230 content immunity, of course, is not absolute. Platforms are liable for content they create. They must not abet crime. A growing number of laws force platforms to remove user content once they have been informed of its illegality. Platforms must obey content-related laws such as those covering copyright. Traditional media and publishers enable plenty of mischief, for all their legal responsibility. Lawyers argue over the constitutionality of CDA 230, so maybe one day a court, rather than politicians, will torpedo it. It’s true too that CDA 230 has never been more vulnerable because the conservative side of US politics reckons that Big Platforms unfairly silence it, while the liberal side thinks the platforms are not doing enough to remove what it judges to be hate speech.
Even though most politicians are unhappy with the lack of accountability stemming from CDA 230, the left-right disagreement over whether to prioritise fighting fake news or protecting free speech is another impediment: it makes any consensus for action harder to build.
The CDA 230 protection will stay. As will the controversies swirling around the Catch-22.
Owning up
Personal responsibility has been the cardinal principle of ethics since Aristotle founded the formal study of morals in ancient times. The concept that people are responsible for the consequences of their behaviour is a cornerstone of legal systems. There are exceptions: ‘diminished responsibility’ is an accepted defence in criminal cases for people acting under duress, the mentally impaired and the insane. But mostly (allowing for exceptions such as diplomats), individuals and companies are responsible for the actions they take.
In this age when social media is so popular and influential, it galls many that no one seems to be responsible for the misinformation that often spreads rapidly and widely in cyberspace. Some blame the social networks for intentionally and cynically ignoring misinformation on their platforms, suggesting that higher engagement enables them to sell more ads, a claim the networks reject. In any case, viral misinformation has been blamed for everything from genocide to youth suicides; from encouraging anti-vaxxers to fostering the polarisation that the authors of How Democracies Die say “challenges US democracy”.
Some say the solution is ending CDA 230, come what may to the platforms. Shunning this path, politicians have encouraged the companies to better regulate themselves (which they have done) while tightening content exemptions to the CDA 230 immunity. In 2018, for example, US lawmakers carved violations of sex-trafficking laws out of CDA 230 protections, a change the platforms initially opposed due to the existential threat that overturning CDA 230 poses.
Some of the proposals circulating in Washington nowadays amount to a push to carve out more exceptions to the CDA 230 immunity. Warner’s bill is one example, though it seeks to remove CDA 230 protections only when paid content abuses or seeks to defraud people.
Other proposals would make the immunities conditional: online companies would need to fulfil certain requirements, generally along the lines of reporting and removing criminal-related activities, to qualify. Facebook CEO Mark Zuckerberg appears to back such measures as part of “thoughtful reform” of section 230. “We believe Congress should consider making platforms’ intermediary liability protection for certain types of unlawful content conditional on companies’ ability to meet best practices to combat the spread of this content,” he said in March in a submission to Congress before a hearing on the events of January 6.
Some US lawmakers say outlawing anonymity might be another way to improve content on the internet. But this may lead nowhere because courts have previously ruled the US constitution supports anonymity as an “honourable tradition of advocacy and dissent”.
All in all, while there will be incremental reform on advertising transparency and on liability for failing to remove banned material (such as terrorist content and child pornography), no major limitations are likely to be placed on the content immunity platforms enjoy.
Private control
In 2017, the day after the deadly ‘far-right rally’ at Charlottesville, Matthew Prince, the CEO of US-based internet-infrastructure company Cloudflare, woke up “in a bad mood”. In a memo to staff, Prince recounted that, on the rationale that “the people behind the Daily Stormer are assholes”, he disabled the website of the white-supremacist outlet, as Cloudflare’s terms of service allow. “No one should have that power,” he admitted.
But most privately owned businesses have always had the power to withdraw their services. (Telecoms, for instance, can’t discriminate under common-carrier laws, nor can utilities.) It’s just that the right of private entities to control their services matters more on the internet because a few private companies control cyberspace. A smattering of CEOs can thus cancel anyone or censor anything. This power became apparent when, after the Capitol was stormed, Big Platforms blackballed the sitting (even if outgoing) US president and crippled internet communications for many of his supporters by disabling the micro-blogging site Parler. Each platform justified Trump’s suspension by citing his role in inciting the violence at the Capitol and the potential for further trouble ahead of the transfer of power to President Joe Biden. Facebook’s Oversight Board, a committee appointed by the company to provide recommendations on content, in May upheld the ban on Trump.
The decisions to electronically silence Trump have startled many from all sides of US politics. The progressive American Civil Liberties Union warned of Big Tech’s “unchecked power to remove people from platforms that have become indispensable for the speech of billions”.
While platforms have the legal right to ‘deplatform’, exercising it added to their political liability: Republicans and their allies now eye them as opponents. But there’s little even Republicans back in power could do to ensure their presence on social media or to limit the platforms’ power over content decisions.
After all, they couldn’t do much while in power. In 2020, when Twitter attached truth warnings to Trump’s tweets, he could only respond with a hollow executive order about “preventing online censorship” while calling for CDA 230 to be revoked. The then-Republican-controlled Senate could do no more than stage the political gesture of subpoenaing Big Platform CEOs to appear before the chamber.
Reducing the power of platforms to make unilateral content decisions may be impossible because free-speech legal protections such as the First Amendment in the US are aimed at limiting the reach of governments. When it comes to private entities or individuals, these protections stop the government from forcing them to associate with speech they oppose.
Congress can thus no more oblige Twitter to host the US president du jour than it can compel The Washington Post to publish White House media releases. Who wouldn’t think that would be crazy?
Michael Collins is an investment specialist at Magellan. Michael has worked as an investment specialist/commentator for money managers in Australia since 2000. Before that, Michael worked for 14 years as a business journalist for mainstream...