A Personalized Algorithm Section 230 Carveout is Bad Policy

Frances Haugen, the Facebook (now Meta) whistleblower, has captured the attention of lawmakers and journalists. She has testified before Congress and the United Kingdom's Parliament and has served as a catalyst for those looking to rein in "Big Tech." Haugen's recommendations range from calling for Mark Zuckerberg's resignation to policy proposals aimed at changing how the social media company operates. One such suggestion, reforming Section 230 to hold companies liable for content promoted through personalized algorithms, prompted several Democrats to introduce legislation. However, algorithms are not the boogeymen some lawmakers believe them to be, and carving out Section 230 won't have the effects they intend.

Section 230 of the Communications Decency Act shields online intermediaries from liability both for user-generated content and for the content moderation decisions they make on their platforms. Though subject to criticism from both the political left and right, Section 230 is essential to promoting free speech online and is foundational to the internet as we know it. While a carveout for algorithms may seem like a targeted reform, its ramifications would be anything but minor.

Four members of the House Committee on Energy and Commerce, Representatives Frank Pallone Jr. (D-NJ), Mike Doyle (D-PA), Jan Schakowsky (D-IL), and Anna Eshoo (D-CA), introduced the Justice Against Malicious Algorithms Act. The bill would remove Section 230 liability protections when an online platform "recklessly" uses a personalized algorithm that "materially contributes to a physical or severe emotional injury." Its stated purpose is to limit the spread of misinformation and extremism. However, targeting algorithms is a clumsy way to address those problems and would not have the desired effect.

The first problem with this bill is a familiar one for Section 230 legislation: the First Amendment. Online platforms retain their First Amendment rights, and carving out Section 230 liability protections won't change that. It's important to note that Section 230 is an enhancement of the First Amendment, not a substitute for it. By allowing companies to seek early dismissal of meritless lawsuits, Section 230 promotes free speech online. As Daniel Lyons of the American Enterprise Institute points out, "editorial control also presumably encompasses subsidiary decisions, such as how to display certain information (even by algorithm), which are the high-tech equivalents of deciding which story goes on the front page and which is buried on page 13." The First Amendment protects the editorial choices of private actors, and this legislation cannot change that.

This is not the first attempted carveout of Section 230. Lawmakers have floated poking holes in Section 230 to remove liability protections for misinformation, removal of constitutionally protected speech, terrorist or extremist content, bullying, and more. While many Americans might agree that some of these categories of speech should not be hosted online, carveouts of Section 230 are an ineffective way to address them. Each of us likely has an idea of what "misinformation" or "bullying" means, but it is likely very different from someone else's definition. The contextual nature of speech and the subjective judgments about where the line falls between "good" and "bad" speech make carveouts an ineffective tool for reining in bad actors, and they would likely lead to less speech online. It's also important to note that Section 230 allows companies to address these issues on their platforms through content moderation, and the diverse ways in which platforms do so give consumers greater choice.

While lawmakers may have one platform, in this case Facebook, or several platforms in mind when crafting legislation, the impacts are widespread. The only carveouts of Section 230 to date were the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA) and the Stop Enabling Sex Traffickers Act (SESTA), collectively referred to as FOSTA-SESTA. That legislation was similarly well-intentioned: it aimed to curb illegal sex trafficking online and removed Section 230 liability protections for platforms that promote prostitution. While Backpage was the target of the legislation, it led several platforms to remove or restrict services, such as Craigslist's personals section. The law has been criticized by sex workers for making them less safe, and only one charge has been filed under it. It would be very difficult to call that a success.

Following the release of the whistleblower's reports, the Senate Commerce Committee held a hearing featuring executives from Snap, TikTok, and YouTube. In the hearing, Jennifer Stout, Vice President of Global Public Policy at Snap, made the point that Snap uses algorithms very differently than other online platforms do. While the mystical "algorithm" may be a convenient boogeyman, not all algorithms are used in the same way or to the same extent. Proponents of the Justice Against Malicious Algorithms Act point to algorithms as promoting misinformation, but algorithms also prevent the spread of harmful or dangerous content. There is an enormous amount of user-generated content online, and algorithms help consumers sort through it to find what's relevant to them. Lawmakers' example of searching for a type of content and then seeing more of that content is also not unexpected. If you ever log into someone else's Netflix account, you'll likely see different titles and recommendations based on that user's tastes, which may not be relevant to you. In many cases, algorithms make platforms with large amounts of content more user-friendly.
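To make concrete what a "personalized algorithm" in the bill's sense can amount to, consider a toy sketch: rank content by its overlap with topics a user has previously engaged with. This is purely illustrative; the function and data below are invented for this example, and no real platform's recommendation system works this simply.

```python
# Toy sketch of a "personalized algorithm": rank candidate posts by how
# often their topics appear in the user's engagement history. Illustrative
# only; real recommendation systems are vastly more complex.

from collections import Counter

def personalized_ranking(posts, user_history):
    """Rank posts by overlap with topics the user has engaged with."""
    # Count how often each topic appears in the user's history.
    affinity = Counter(topic for post in user_history for topic in post["topics"])
    # Score each candidate post by its summed topic affinity
    # (Counter returns 0 for topics the user has never engaged with).
    def score(post):
        return sum(affinity[topic] for topic in post["topics"])
    return sorted(posts, key=score, reverse=True)

# A user who engaged with cooking content sees cooking ranked first.
history = [{"topics": ["cooking"]}, {"topics": ["cooking", "travel"]}]
candidates = [
    {"id": 1, "topics": ["politics"]},
    {"id": 2, "topics": ["cooking"]},
    {"id": 3, "topics": ["travel"]},
]
print(personalized_ranking(candidates, history))  # post 2, then 3, then 1
```

Even this trivial sketch shows why "seeing more of what you searched for" is expected behavior rather than something sinister: personalization is, by design, a feedback loop between past engagement and future ranking.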

Unfortunately, this legislation could push some platforms to abandon the algorithms that show users more relevant content, stop the spread of harmful content, and promote "good" content. Fewer algorithms won't keep misinformation off the internet, and carveouts of Section 230 are a clumsy and ineffective way to address online harms. Well-intentioned legislation does not always translate into positive results for consumers.