On Feb 21, the US Supreme Court began hearing oral arguments in a case known as Gonzalez versus Google. The case involves the family of an American woman killed in the 2015 terrorist attacks by ISIS in Paris. The family claims that Google, which owns YouTube, bears responsibility for the platform’s automated recommendation of videos, including those that could contribute to radicalisation.
A day later, on Feb 22, America’s highest court heard arguments in another case, Twitter versus Taamneh. A terrorist attack on the Reina nightclub in Istanbul had killed Nawras Alassaf, a Jordanian. His relatives, including Mehier Taamneh, sued the microblogging platform Twitter in the US, arguing that the platform could have removed the terrorists’ tweets and accounts but failed to do so proactively.
Both cases examine whether internet firms should be liable under federal anti-terrorism law when they fail to purge terrorist content from their sites or recommend content that radicalises terrorists. At the centre of both cases is Section 230, a 26-word provision of a 1996 US law that “created the internet as we know it” and has shielded internet firms like Google, Facebook, Twitter, Instagram and others from liability for content on their sites.
Why is it such a big deal? In 1996, the US Congress passed the Communications Decency Act to provide limited federal immunity to internet companies. Section 230 of the Act lets users and providers of “interactive computer services”, or internet platforms, make their own content moderation decisions while permitting liability in certain limited contexts. Section 230 treats internet firms like telephone companies in that they are not liable for the content they carry.
If you telephone a friend on the other side of town, or indeed the other side of the world, and say something rude about me, or for that matter any other person, the phone company can’t be sued for carrying whatever you may have said, irrespective of how rude, libellous or wrong it was. The phone companies are just message carriers and can’t be blamed for the content. Section 230 updated the 1934 Communications Act, extending that kind of immunity to the internet companies that emerged in the mid-1990s. The idea behind Section 230 was that the internet platforms were far better suited than the government to come up with the rules of the new road.
I am no legal expert, but I have spoken to many lawyers specialising in communications law in recent weeks. What Section 230 effectively does is let players like Alphabet, which owns search giant Google and video-sharing platform YouTube, or the social media behemoth Meta Platforms, which owns Facebook, Instagram and WhatsApp, operate as media companies without the onerous responsibilities of traditional media players such as newspaper publishers or the owners of TV channels and radio stations.
While Section 230 precludes internet firms from being held legally responsible for any information provided by third parties or users, it does not prevent them from being held responsible for information they develop on their own or for activities unrelated to third-party content. But then again, Facebook doesn’t produce any content, and YouTube doesn’t make videos. They only provide a platform for users to upload videos or post content.
Not media companies
The videos on YouTube and photos on Instagram don’t make them media companies in the eyes of the law. Section 230 specifies that internet firms may not “be treated as the publisher or speaker of any information provided by another information content provider” and bars all “lawsuits seeking to hold a service provider liable for its exercise of a publisher’s traditional editorial functions — such as deciding whether to publish, withdraw, postpone or alter the content.”
It also clearly states that internet firms and users cannot be held liable for voluntarily acting in good faith to restrict access to “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable” material. This protection applies only to good-faith takedowns of objectionable material.
I can’t write anything in the traditional media to malign anyone or publish obscene, lewd, violent, harassing or objectionable material. Even if I tried, the editor and the publisher would remove words, photos or illustrations they believe are maliciously objectionable or promote violence, because they themselves could be sued.
However, anyone can freely post on YouTube, Facebook, Instagram, Twitter or TikTok, and the platforms themselves are immune from any lawsuit over what users post.
There are, of course, some minor caveats. Immunity generally does not apply to certain cases related to national security or sex trafficking. But in general, Section 230 has enabled platforms like Facebook and YouTube, and third-party users like you and me, to get away with almost anything on the internet.
It is important to understand how we got here. In the early 1990s, the internet spread like wildfire precisely because anybody could put just about anything online. With a newspaper or a TV programme, there are always reporters, editors and producers picking and choosing what is fit to publish and sanitising content to protect themselves and their organisations from legal harm. The internet, on the other hand, was from the very start a freeform, anything-goes platform that was tailor-made for the new attention economy.
To attract maximum attention, all you needed to do was post something provocative. The more provocative it was, the more attention it got; the more attention it got, the more advertisers were lured in; and the more advertisers were lured in, the more money platforms like Google, Facebook or TikTok had to plough back into growing the ever-expanding attention economy. Alphabet had revenues of US$282 billion ($318 billion) last year, 84% of it from advertising, making it the 14th largest listed company measured by revenues. Meta’s revenues topped US$116 billion (over 95% from advertising) in 2022. Ironically, Alphabet barely grew, and Meta’s revenues declined last year (TikTok’s ad revenues were US$9.9 billion last year).
Let me put all this in perspective. Last year, global newspaper advertising was estimated at just under US$28 billion. Alphabet alone rakes in about ten times the advertising revenue of all the newspapers in the world combined, including what newspapers generate from their websites as well as their print editions.
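As a rough back-of-the-envelope check, and assuming the 84% advertising share applies to the full US$282 billion: 0.84 × US$282 billion ≈ US$237 billion of advertising revenue for Alphabet, and US$237 billion ÷ US$28 billion ≈ 8.5, so the multiple is indeed in the region of eight to ten.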
Social media, not traditional media
YouTube and Facebook can attract far more attention than your hometown newspaper. Newspaper publishers and the owners of TV channels and radio stations will tell you that they have lost readers, viewers, listeners and advertisers over the past two decades because Section 230 protects social media but not traditional media. It’s not a level playing field.
If you have been following the US Supreme Court hearings, you are probably aware that the nine justices, in their remarks, appeared hesitant to upset the delicate balance set by Section 230, which has until now protected internet players from liability for their users’ posts.
If the justices are unwilling to intervene, the ball will fall back into the legislators’ court: the Republican-controlled US House of Representatives and the Democrat-controlled US Senate. The problem is that each party holds only a very slim majority in the chamber it controls.
But politicians on both sides of the aisle hate Section 230, albeit for different reasons. Republicans believe the law gives tech companies like Facebook and Google too much say over what people see online, while Democrats have complained that it gives tech firms a licence to allow hate speech and disinformation.
One big impediment to amending Section 230 is the free speech clause of the First Amendment to the US Constitution, which limits the government’s ability to regulate speech. There are a couple of issues here. The first is whether any amendment to Section 230 would infringe the constitutionally protected speech of either internet firms or users of internet platforms.
The second is that even if Section 230 were somehow repealed, whether entirely or in part, the First Amendment would still prevent the government or private litigants from holding internet firms liable for hosting users’ content.
On Feb 28, a bipartisan group of US senators and members of the House tabled two separate but fairly similar bills in the two chambers to reform Section 230. The amendment to Section 230, called the Safeguarding Against Fraud, Exploitation, Threats, Extremism and Consumer Harms (Safe Tech) Act, would allow internet firms to be held accountable for enabling cyber-stalking, online harassment and discrimination.
“For too long, Section 230 has given cover to social media companies as they turn a blind eye to the harmful scams, harassment, and violent extremism that run rampant across their platforms,” Democratic Senator Mark Warner of Virginia, one of the bill’s sponsors, said in a statement last week. “When Section 230 was enacted over 25 years ago, the internet we use today was not even fathomable,” he said. “This legislation takes strides to update a law meant to encourage service providers to develop tools and policies to support effective moderation and allows them to finally be held accountable for the harmful, often criminal behaviour on their platforms.”
The way Warner sees it, while the Supreme Court is debating the future of Section 230, there is a solution in Congress in the form of legislation. “This legislation will hold platforms accountable for the ads and content they peddle that have real-world consequences. Regardless of the Court’s findings, it’s clear that we need to act to rein in these companies that have used Section 230 as a shield for far too long.” Warner also said that the Act would force online service providers to deal with improper use of their platforms. Those that don’t would face civil liability, which wasn’t possible previously.
The Safe Tech Act clarifies and updates Section 230 in other ways. Advertising and other paid content would no longer be protected, removing the cover for misleading content, scams and fraud. It also clarifies that Section 230 does not bar injunctive relief, allowing consumers to seek court orders when content on a provider’s site is likely to cause irreparable harm.
So what will happen? It is unlikely that the US Supreme Court will do away with Section 230, not least because any broad ruling would run up against the free speech protections of the US Constitution. The court could, however, focus narrowly on whether exceptions should be made for terrorism. It is also highly unlikely that Congress will pass any landmark amendment to Section 230 any time soon. The narrow majorities in both chambers mean there will be too many compromises in the Senate and House, watering down the original bills. A more far-reaching amendment to govern the internet may have to wait until after the US presidential election in 2024.
Assif Shameen is a technology and business writer based in North America