Could SCOTUS break the Internet? A discussion about technology and free speech
Free speech is a hallmark of the United States. The growth of social media has turned this fundamental right into one of the most polarized topics in today's political discourse. As has been the case throughout history, after every technological revolution it takes time for society to adjust and develop the norms of the new era.
In the early days of the Internet revolution, newly launched platforms were caught up in litigation that held them responsible for what users posted. Then-U.S. Representatives Chris Cox and Ron Wyden (now a Senator) feared that holding platforms liable for content their users created would not only lead to extreme limits on what people could create, but would also stifle innovation built on this new technology. Fear of litigation was sapping the incentive to build, which is why they wrote Section 230 of the Communications Decency Act, passed into law in 1996.
Section 230 states that platforms are not liable for third-party content (more commonly called user content) and are free to moderate content as they see fit. Under Section 230, the Internet flourished and innovation took an even bigger leap.
Now in 2023, nearly 30 years after Section 230 was enacted, the Supreme Court took up cases that could have defined where the law's protections end. The two cases, Gonzalez v. Google and Twitter v. Taamneh, were argued before the Court and decided in May 2023. As it stands, Section 230 remains unchanged, at least for now.
The implications of the Court's rulings and their potential impact on First Amendment rights are nuanced. To understand them, Stand Together Trust spoke with Ashkhen Kazaryan, Senior Fellow on the Free Speech and Peace team at Stand Together Trust. Prior to joining Stand Together Trust, Ashkhen led content regulation policy for Meta in North America and Latin America and was its policy lead on Section 230. Before that, she was Director of Civil Liberties at TechFreedom.
Stand Together Trust: Some people believe that unregulated speech online poses significant threats to civil society. They argue that the best approach is for either companies or the government to set rules that shut down any speech or communications they determine to be harmful. What do you say?
Kazaryan: I think when it comes to the Internet, the biggest thing that differentiates it from all the previous ways of communicating, and all the previous revolutions spurred by new technology, is the scale of speech we now face. That scale brings both benefits and harms.
One of the benefits is how many more people any one person can reach. Individuals now have a far greater ability to share their views and hear from others, whether about social causes, new businesses, or niche hobbies and passions. For example, movements like the Tea Party, Me Too, and Black Lives Matter would not have been able to catch on and organize at the scale and speed they did without the protections of Section 230.
New technology also comes with costs. But many of the harms we associate with the Internet aren't Internet-specific; they stem from underlying societal issues we need to address, like growing polarization, declining social trust, and the difficulty of collaborating to solve problems in society.
The Stand Together community is investing in addressing those challenges at their roots.
In the meantime, we’re working to protect against bureaucratic overreach that could slow or stop the kind of innovation necessary to solve the tech-specific problems.
If we put mechanisms in place that gave politicians the ability to reshape what we see online, we wouldn't be far from the world of George Orwell's 1984. Giving politicians the power to silence the ideas they don't like is not the way to go, and it certainly isn't the path this country has taken before. There is without a doubt more free speech globally because the major platforms and tech companies started in America and led with American values: from the Arab Spring in the early 2010s, to Russian dissidents fighting the disinformation their government spreads through de facto nationalized channels, to the recent Mahsa Amini protests in Iran. Free speech finds its way to people.
The United States has been a leader in protecting free speech as new technological revolutions take off. If we work against that, not only will we fall behind as a country, but we will also set a bad example for countries where democracy doesn't exist or is still trying to develop.
So, let's talk about the recent technology and free speech cases at the Supreme Court. It seems relatively early in the digital revolution for the highest court in the land to be setting precedent, considering we're still, as you said, "trying to figure out what's what" with a lot of technology. How did we get here?
A lot of these issues of speech online have been boiling over for the past decade. As technology has been developing, our society has been adjusting to this new digital age.
Courts are now facing a plethora of these questions, whether from the need to interpret our legal protections in the new digital era or because states are passing bills, sometimes similar and sometimes flatly contradictory, on a variety of tech issues.
The two cases the Supreme Court heard this term (2022-2023), Gonzalez v. Google and Twitter v. Taamneh, both hinge on similar, truly horrible sets of facts. But they address a broader, fundamental question: how do we communicate online, and how do we make sure all voices are heard?
The Gonzalez v. Google case involves the family of Nohemi Gonzalez, who was killed in the Paris terrorist attacks in 2015. The Gonzalez family sued YouTube's parent company, Google, for algorithmically recommending content that allegedly radicalized the extremists responsible for the attacks. The family's claims changed throughout the litigation, but the question that ended up before the Court was: does Section 230 protect platforms' ability to algorithmically recommend content?
There was no evidence that algorithmically recommended content on YouTube radicalized individuals connected to the terrorists behind these horrific crimes. ISIS content exists online no matter how fast you take it down, but there is no direct link between what was happening on YouTube and the 2015 Paris terrorist attacks.
In Twitter v. Taamneh, the question was whether platforms are liable under the Anti-Terrorism Act for their algorithmic recommendations when they fail to take down every single piece of terrorist content, even when they intend to remove it all.
The set of facts here is also horrible. The family of Nawras Alassaf, who died in the 2017 Istanbul attack, sued Twitter. As in Gonzalez v. Google, there was no evidence linking Twitter to the actual attack that could make the platform liable, and the Court agreed.
In a 9-0 opinion written by Justice Clarence Thomas, SCOTUS found that Twitter was not liable. Justice Thomas wrote that algorithms are part of a platform's infrastructure, an assertion that could have significant impact on the progress of technology and free speech. Had the pendulum swung the other way, platforms would have faced a difficult choice: over-moderate to shield themselves from liability as much as possible, or stop using algorithms to rank and display content at all.
Additionally, during oral arguments, to the surprise of many scholars, the Court expressed concern about the risk of breaking the Internet. Justice Elena Kagan admitted that the Justices are not the nine "greatest experts" on the Internet. And the question remains: if Section 230 stops protecting algorithmic recommendations, what will the Internet look like going forward?
The unfortunate truth is that tech companies, the biggest ones but also mid-size and smaller ones, can't operate with this liability hanging over their heads. They would have to change the way they serve content and the way they do business, and the consequences for users would be substantial. Without algorithmic ranking, speech about countless interests, passions, and important social issues, issues that don't get mainstream media attention, would drown in the noise of the Internet.
Algorithms are used to move good content up into your line of sight and move bad content down. Of course, there is content online that is constitutionally protected but isn't good content; the term often used is "lawful but awful." Algorithms are often used to push that lawful but awful content down.
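To make that mechanism concrete, here is a minimal, hypothetical sketch of how a ranking step might score posts and demote "lawful but awful" content rather than remove it. The Post fields, the scores, and the demotion factor are illustrative assumptions for this example only, not a description of how any real platform's algorithm works.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    relevance: float        # how well the post matches this viewer's interests, 0..1
    quality: float          # signal from quality/integrity classifiers, 0..1
    lawful_but_awful: bool  # protected speech flagged as objectionable

# Illustrative demotion weight; real systems tune such parameters empirically.
DEMOTION_FACTOR = 0.2

def rank_score(post: Post) -> float:
    """Combine relevance and quality; demote flagged content rather than remove it."""
    score = post.relevance * post.quality
    if post.lawful_but_awful:
        score *= DEMOTION_FACTOR  # pushed down the feed, but still hosted
    return score

feed = [
    Post("cooking-tips", relevance=0.9, quality=0.8, lawful_but_awful=False),
    Post("shock-video", relevance=0.95, quality=0.7, lawful_but_awful=True),
    Post("local-news", relevance=0.6, quality=0.9, lawful_but_awful=False),
]

# Highest-scoring posts land first in the viewer's line of sight.
for post in sorted(feed, key=rank_score, reverse=True):
    print(f"{post.post_id}: {rank_score(post):.3f}")
```

The design choice the sketch illustrates is the one described above: demotion reduces the reach of lawful-but-awful content while still hosting it, which is a different act from removal.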
As oral arguments progressed in Gonzalez v. Google, several amicus briefs from Stand Together Trust partners were quoted. One of them, from the Center for Democracy and Technology, traced the history of the Internet and showed that algorithms are not some new invention that just came around; they already existed when Section 230 was passed into law.
There was also an amicus brief from the two people who wrote Section 230, Senator Ron Wyden and former Representative Chris Cox. They went on the record to say that they intended algorithmic recommendations to be protected when they drafted Section 230. That was done to protect free speech online, and protecting free speech online gives a voice to marginalized groups, groups on the edges of the political spectrum, and groups that are often underrepresented in policy, political, and societal debates.
The Court's decision in favor of Twitter, and its choice not to rule on the Section 230 question in Gonzalez v. Google, are significant wins for proponents of free speech. The decision was not about protecting big tech platforms; the question the Court weighed was how we communicate online and how we make sure all voices are heard.
Moving forward, a lot of eyes will be on the NetChoice cases out of Florida and Texas. Can you tell us why?
The Supreme Court is currently considering whether to take up these two cases. If the laws are left untouched, the consequences will be seismic. The Florida and Texas laws differ in substance, but the idea behind them and the way they work are somewhat similar.
Florida's SB 7072 would force platforms to host the speech of anyone running for office. Texas's HB 20 is, as kids say these days, "doing too much." It designates major social media platforms as "common carriers" and imposes burdensome regulations on the individuals and content they can host. The law explicitly forbids platforms from removing, demonetizing, or blocking users or content based on "viewpoint." Platforms found in violation would be held liable for each instance of content removal.
On May 23, 2022, the Eleventh Circuit invalidated the sections of the Florida law that restrict social media platforms' ability to moderate and curate content. In contrast, on September 16, 2022, the Fifth Circuit upheld Texas's law in its entirety.
Now we have what's called a "circuit split," and on an issue the Supreme Court has shown interest in before. That means the odds of the Justices deciding to hear these cases are high.
Remember that apocalyptic prediction I made? If the Supreme Court doesn't hear the NetChoice cases, social media platforms are going to have to host awful, despicable speech: everything from Nazi content to videos of school shootings to content promoting anorexia and self-harm. My hope is that during oral arguments the Court will once and for all establish that the government does not have the power to decide what we see and say online.
This interview originally appeared at Stand Together Trust on July 11, 2023. To learn more about Stand Together’s Free Speech and Peace efforts, visit here.