Digital platforms confront fake news
In 1995 – three years before Google was founded, nine years before Facebook, a decade before YouTube and 11 years before Twitter – a US court ruled that web-services company Prodigy Services was akin to a publisher because it vetted and deleted inappropriate material from online message boards that attracted about 60,000 posts each day. The ruling meant that companies that moderated content were liable for all material on their websites, whereas passive hosts of content were not.
Two US lawmakers, concerned the ruling would stifle innovation, introduced an amendment to the Communications Decency Act to ensure “providers of an interactive computer service” were not liable for what people might say and do on their websites. The amendment stood in contrast to the way publishers and broadcasters in the US and elsewhere are held legally accountable for the content they make public, whether in traditional or online form.
The amendment, which became Section 230 within the Telecommunications Act of 1996 (and is known as CDA 230), enabled companies such as Facebook, Google, LinkedIn (owned by Microsoft since 2016), Reddit, Snapchat, Tumblr, Twitter and YouTube (owned by Google since 2006) to emerge as human ingenuity allowed.
But the growth of these companies seems to have outpaced their ability to police misuse of their products, misuse for which they incur no legal penalty. Technology is neutral in a moral or ethical sense, but that neutrality blurs when even a few bad actors put it to ill use. The internet’s drawbacks include that it rewards excess with attention and that it can be a tool for extremists. Across platforms the world over, fake news, the manipulation of algorithms to promote articles to ‘trending’ status, troll armies, bogus ‘likes’, web-based smear campaigns and viral conspiracy theories have hyped partisanship, cheapened facts and amplified the role of emotion in discourse on these for-profit ‘public squares’. Social media is consequently accused of being a ‘threat to democracy’.
Moves are underway in the US to extend to the internet the regulations that govern political advertising in traditional media. While regulation of political ads is feasible, US lawmakers are restrained from taking on the tech giants over content for two main reasons. The first is that the products of these companies are beloved by their billions of users, so anything that disrupted these services would prove unpopular. The other is that digital platforms, whatever their size, are difficult to regulate because they differ from traditional publishers and broadcasters. The content-heavy business models of the platforms are likely safe for now.
That said, the tech companies (as distinct from their products) have shed much goodwill in recent years as these and other controversies have swirled. In many ways, the influence of the internet on politics is exaggerated; at worst, the platforms have magnified conflicts, not caused them. But with so many controversies raging of late, the platforms are under pressure to limit abuses of inventions that have a more sinister side than their creators perhaps expected. If the platforms don’t assume more control, regulators the world over will force them to.
Content challenges
The digital platforms are responding. Facebook has cracked down on false accounts and is taking steps to reduce fake news. Google is curbing problematic searches and trying to promote ‘authoritative’ content. Search ‘did the Holocaust happen’, for instance, and denials no longer appear on the first page of results. Twitter has introduced rules around hate symbols, revenge porn and the glorification of violence, and is suppressing bots that mass-tweet to game trending topics. Reddit is seeking to rid the internet forum of content that incites violence. Tech companies have teamed with G7 countries to block extremist Islamist content.
But the platforms face challenges in defusing the controversies around content. To maximise user numbers and time on site, Facebook’s algorithms are coded to send people content that inspires ‘comments’, ‘likes’ and ‘shares’ with friends. Users end up fragmented into like-minded clusters in which agreeable news, fake or not, spreads easily. On top of that, news stories that strike some as objective strike others as biased. Facebook, for instance, was accused by former staff of suppressing news stories that would please conservatives on the influential ‘trending’ sidebar on user home pages, an allegation a Facebook investigation disputed.
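To illustrate the mechanism, here is a minimal sketch, in Python, of the kind of engagement-weighted feed ranking described above. The field names and weights are hypothetical illustrations, not Facebook’s actual code; the point is simply that any feed ranked purely on predicted reactions will favour whatever provokes the strongest response, accurate or not.

```python
from dataclasses import dataclass

@dataclass
class Post:
    headline: str
    predicted_comments: float  # model's estimate of reactions the post will draw
    predicted_likes: float
    predicted_shares: float

def engagement_score(post: Post) -> float:
    """Score a post by predicted reactions; the weights are illustrative only."""
    # Shares are weighted highest because they push content into new feeds.
    return (2.0 * post.predicted_comments
            + 1.0 * post.predicted_likes
            + 3.0 * post.predicted_shares)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-scoring posts first: whatever provokes the strongest
    # reaction rises to the top of the feed.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured policy analysis", 2, 40, 1),
    Post("Outrage-bait conspiracy claim", 90, 300, 120),
])
for post in feed:
    print(f"{engagement_score(post):7.1f}  {post.headline}")
```

In this toy example the provocative post scores 840 against 47 for the sober one, so it leads the feed – the clustering and fake-news dynamics the paragraph describes follow from that ordering, not from any editorial intent.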
An overarching challenge for the platforms is how to balance the trade-off between controlling content and keeping their networks open to all to preserve free speech. When Facebook, Google or Twitter censors something, it often only provokes a backlash and stirs debate about why a private company holds a power usually reserved for governments. Twitter, for example, was criticised when it hobbled actor Rose McGowan’s account at a pivotal moment in the scandal surrounding Hollywood producer Harvey Weinstein, after she attacked men distancing themselves from Weinstein by tweeting: “You all knew”. To limit hate speech, Google has partnered with the controversial Southern Poverty Law Center, which condemns many groups and individuals on disputed grounds. Google is accused by left- and right-wing fringe political outlets of censorship, of tampering with search results to suppress visits to their sites. The opaqueness of how Google derives its search results only inflames its opponents; Google says it keeps its search algorithms secret so that people can’t manipulate them.
Regulatory challenges
While legislation on political ads stands a fair chance of being passed, the challenge for lawmakers on content remains that the internet is “unique” and “distinct”, as the US election body put it. Digital platforms are not publishers or broadcasters, even though many people go to them for their news. While Facebook CEO Mark Zuckerberg concedes misinformation on Facebook may have influenced the US election, the company argues it is not a media company, even with its News Feed, a stance that implicitly means it deserves the CDA 230 protection. “We’re a tech company. We don’t hire journalists,” Facebook COO Sheryl Sandberg said recently. Twitter likewise forswears any ability to regulate content on such an open and real-time platform, though Snapchat, which operates the Discover publisher portal, says it is a publisher.
The tech industry overall says that CDA 230 is a needed protection for online services that host third-party content and for bloggers who host comments from readers. Without the exemption, sites would either forgo hosting user content or be forced to ensure that content didn’t breach laws – a burden that would fall differently across the platforms. “Given the sheer size of user-generated websites …, it would be infeasible for online intermediaries to prevent objectionable content from cropping up on their site(s),” says US digital-rights group the Electronic Frontier Foundation. “Rather than face potential liability for their users' actions, most would likely not host any user content at all or would need to protect themselves by being actively engaged in censoring what we say, what we see, and what we do online.”
The traditional media derides the tech line as too narrow a definition of a publisher or broadcaster – see WIRED’s “Memo to Facebook: How to tell if you are a media company”: Are you the country’s largest source of news? Do you commission content? Employ content moderators? Censor content? Use fact checkers? Does your CEO sort of admit to running a media company? Have you partnered with a media company to attract viewers? Yes, yes, yes, yes, yes, yes and yes, WIRED concludes.
The solution for US politicians would seem to be to impose content rules on the digital platforms that are forceful but less stringent than those governing traditional media. Germany’s new Network Enforcement Law is a portent of regulation to come; by The New York Times’s count, it is regarded as the toughest of the internet-content laws recently passed in more than 50 countries. Under the German law, effective from October 1 2017, digital platforms face fines for hosting for more than 24 hours any content that “manifestly” violates the country’s Criminal Code, which bars incitement to hatred or crime.
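To make the 24-hour rule concrete, the sketch below shows the deadline arithmetic a platform’s moderation queue would need to track. The function names and data model are hypothetical, chosen for illustration; the law itself specifies only the deadline and the penalty.

```python
from datetime import datetime, timedelta, timezone

# The law's 24-hour window for removing "manifestly" unlawful content
TAKEDOWN_WINDOW = timedelta(hours=24)

def removal_deadline(reported_at: datetime) -> datetime:
    """Latest moment flagged content may remain online under the 24-hour rule."""
    return reported_at + TAKEDOWN_WINDOW

def is_overdue(reported_at: datetime, now: datetime) -> bool:
    # Past the deadline means exposure to a fine under the law.
    return now > removal_deadline(reported_at)

now = datetime.now(timezone.utc)
reported = now - timedelta(hours=30)  # complaint filed 30 hours ago
print(removal_deadline(reported).isoformat())
print(is_overdue(reported, now))      # True: 30 hours exceeds the 24-hour window
```

At the scale of posts the platforms handle each day, every flagged item starts a clock like this one, which is why compliance is a matter of automated triage rather than case-by-case legal review.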
In the US, a workable compromise on regulating content could take years to work out. With the public still enamoured of their favourite platforms, the tech companies will enjoy for a while yet the protections that flowed from that US court case 22 years ago. How soon those protections are watered down could come down to how well the tech giants police their platforms from here.
By Michael Collins, Investment Specialist
For further insights from Magellan please visit our website