
NTUF Commentary on Texas Social Media Moderation Legislation

Lt. Gov. Dan Patrick
President, Texas State Senate

Written comment for the record with respect to regulation of social media moderation as contemplated by Texas SB 12

I write today on behalf of the National Taxpayers Union Foundation — a nonpartisan research and educational organization that shows Americans how taxes, government spending, and regulations affect them — in order to share our views on the legal climate governing social media content moderation.

As an organization engaged in public policy work from the center-right, we acknowledge and often share the concerns about how these private platforms’ moderation choices affect public dialogue. However, we believe that onerous content moderation restrictions raise several policy and constitutional concerns. As you consider SB 12, we write to share several points for your consideration.

1. Imposing legal restrictions on how and when a private platform may display or remove third-party-generated content violates its right to freedom of speech and association under the First Amendment.

Courts have repeatedly upheld private companies’ right to free association in moderating speech on their own platforms against a variety of challenges.[1]

As the late Supreme Court Justice Antonin Scalia wrote in Brown v. Entertainment Merchants Assn., “whatever the challenges of applying the Constitution to ever-advancing technology, ‘the basic principles of freedom of speech and the press, like the First Amendment's command, do not vary’ when a new and different medium for communication appears.”[2]

Contrary to the objection that these large private forums are somehow obliged to uphold a First Amendment standard of content moderation, courts have repeatedly confirmed that the First Amendment applies only to government restrictions on speech. Most recently, Justice Kavanaugh confirmed in 2019 that “By contrast, when a private entity provides a forum for speech, the private entity is not ordinarily constrained by the First Amendment because the private entity is not a state actor. The private entity may thus exercise editorial discretion over the speech and speakers in the forum. … In short, merely hosting speech by others is not a traditional, exclusive public function and does not alone transform private entities into state actors subject to First Amendment constraints.”[3]

Crucially, these constitutional protections regarding freedom of association on a private platform are not contingent upon the platform’s editorial decisions being neutral. This applies not only to decisions about what content is left up or deleted, but also to decisions about how content enters users’ feeds, such as post-prioritization algorithms.

To quote Berin Szoka and Corbin Barthold of TechFreedom, “The government cannot force a speaker to explain how it decides what to say. The government can no more compel Twitter to explain or justify its decision-making about which content to carry than it could compel Fox News to explain why it books some guests and not others. These are forms of noncommercial speech that turn not on facts, but on opinions.”[4]

While the revised SB 12 at least removes the impossible standard that these companies cannot moderate on the basis of a post’s “viewpoint,” forcing them to individually justify every moderation decision still intrudes upon their private editorial discretion and will likely be found unconstitutional.

2. Creating a private right of action to pursue content moderation disputes could expose Texans to more harmful content.

Allowing users a private right of action to seek damages for content moderation decisions will force large social media platforms to make decisions that lawmakers may not have contemplated. The easiest way for these companies to respond to a law such as SB 12 may in fact be to drastically scale back what content they remove.

Crucially, less moderation is not necessarily good. Social media platforms routinely take down all manner of speech that is perfectly legal but nonetheless objectionable to the vast majority of users, from the violent to the obscene.

The freedom of these platforms to quickly pull down genuinely offensive content is part of what makes them pleasant for most to use. On such massive platforms as Facebook or Twitter, for example, the initial step of content moderation must be done automatically, via algorithms, to be practically achievable. The parameters for what speech these algorithms take down are necessarily subjective and prone to error, but the trade-off is that a majority of the internet’s most unpleasant content never reaches a user’s feed.

Similarly, allowing users to flag subjectively harmful content such as bullying, extremist group recruitment, and spam is an important part of maintaining a workable online community. Yet these moderating decisions, too, are necessarily context-dependent, subjective, and impossible to keep truly neutral. As real as the concern about private companies restricting political speech is, lawmakers should understand that moderation serves an important purpose in filtering graphic violence, pornography, terrorist recruitment, and other kinds of content that could flourish absent effective tools to shield most users from them.

3. A state-by-state approach to internet content regulation is fundamentally unworkable.

State attempts to regulate social media, like other state-by-state regulation of internet activity, threaten to create a patchwork of laws governing a medium whose users interact without regard for lines on a map, a patchwork that is likely to lead to undesired results.

What if some states mandate that social media platforms increase policing of “hate speech,” however that is defined, whereas others attempt to ban the removal of “political speech,” however that is defined, from platforms entirely? This is not merely a hypothetical; for example, in Colorado, Democratic legislators have proposed a bill[5] that would punish social media platforms for failing to aggressively take down, among other categories, “hate speech,” “conspiracy theories,” and “fake news.”

In the case of creating liability and private rights of action for content moderation, state policies are mostly superseded by federal law under 47 U.S. Code § 230. But even if federal law were not an obstacle, managing content moderation according to a melange of different state regulations would prove functionally impossible, particularly if those guidelines are inconsistent or even competing.

Conclusion

We share the concerns that many lawmakers have expressed about restrictions on political content, but state-level content moderation bills often propose a “solution” that would cause greater disruption and angst for constituents seeking enjoyable social media experiences than the status quo. Legislators should exercise caution and avoid overbearing regulation that could upend the legal foundation underpinning the internet.


[1] Congressional Research Service, “Free Speech and the Regulation of Social Media Content,” March 27, 2019. https://fas.org/sgp/crs/misc/R45650.pdf. This report contains a litany of examples of the Supreme Court and lower courts striking down attempts to regulate speech on social media.

[2] Brown, et al. v. Entertainment Merchants Assn. et al., 564 U.S. 786 (2011). Scalia is quoting from Joseph Burstyn, Inc. v. Wilson, 343 U.S. 495, 503 (1952).

[3] Manhattan Community Access Corp. v. Halleck, 587 U.S. ___ (2019)

[4] Corbin Barthold and Berin Szoka, “No, Florida Can’t Regulate Online Speech,” Lawfare Blog, March 12, 2021. https://www.lawfareblog.com/no-florida-cant-regulate-online-speech