On February 26, 2024, the U.S. Supreme Court will hear oral arguments in two cases—NetChoice v. Paxton and Moody v. NetChoice—that address whether Florida and Texas can enact laws prohibiting social media platforms from moderating content posted by their users.
The Florida law predominantly limits social media platforms’ ability to “censor” (demonetize, remove, or otherwise restrict) political candidates and certain journalistic outlets. It would also prevent the platforms from moderating harmful mis- and disinformation from a range of sources, even prohibiting them from attaching labels that guide users to verified information. The Texas law is far broader: it prevents most widely used websites, from Facebook and X, formerly known as Twitter, to Etsy and Yelp, from enforcing community standards, prohibiting the viewpoint-based removal of nearly any content. This includes preventing the removal of heinous and objectionable material, such as Nazi propaganda, deepfakes, and socially damaging conspiracy theories, from any platform unless the material falls under specific narrow exceptions, chiefly content meeting the narrow legal definition of “unlawful” expression.
These laws, especially the Texas bill, would effectively allow extreme content to thrive on mainstream websites and platforms, while allowing platforms catering to the fringes to remove content at will.
The lower courts’ conflicting rulings
The Supreme Court has long held that corporations, being made up of people, have free speech rights under the First Amendment. Any government effort to require a private actor to speak, or to prohibit it from speaking, must survive strict scrutiny by the courts, a standard that laws very rarely satisfy. As noted above, these cases involve laws that both compel and restrict private actors’ speech. However, the lower courts split on whether the states’ mandates to social media platforms are permitted.
The U.S. Court of Appeals for the 11th Circuit struck down portions of the Florida law, finding that the law violated the First Amendment’s prohibition on government actors restricting private parties’ speech. The court wrote, “Put simply, with minor exceptions, the government can’t tell a private person or entity what to say or how to say it.” Conversely, the increasingly extreme U.S. Court of Appeals for the 5th Circuit issued a 62-page opinion turning this basic concept on its head. That court upheld the Texas law, allowing the state to prohibit private entities from enforcing their online community standards, including the removal of hate speech, misinformation, oblique calls to violence, and other posts that the private companies deemed inappropriate to platform.
While there are legitimate concerns about the concentration of online expression in a handful of social media platforms, and a legitimate desire among states to respond with regulation, that does not justify brazen political attempts to toss out hundreds of years of First Amendment jurisprudence. Further, these cases come before the Supreme Court against the backdrop of the Senate’s bipartisan hearings to address the societal damage taking place across the United States due to near-universal under-moderation by social media platforms.
The who, what, and why of the NetChoice cases
Who:
The NetChoice litigants, along with the Computer and Communications Industry Association, represent the broader computing and online industry, not just the social media platforms that are the immediate target of the Texas and Florida laws. The states passed these laws within a broader political landscape in which conservatives have decried social media’s supposed censorship of their voices, despite studies showing that conservative posts dominate progressive posts on the platforms in terms of engagement by an almost 10-to-1 margin. As the 11th Circuit noted, the intent of the Florida statute was to coerce and prohibit the speech of social media platforms to, as Gov. Ron DeSantis’ (R-FL) signing statement reads, “combat the ‘biased silencing’ of ‘our freedom of speech as conservatives … by the “big tech” oligarchs in Silicon Valley.’” Even some Republican state legislators recognized the First Amendment violations built into these statutes; state Rep. Giovanni Capriglione (TX), for example, asked, “How will a government not use this slippery slope to mandate how other companies and what they can or cannot allow their customers to say or to do, conducting private transactions?”
What:
The Florida law, S.B. 7072, prohibits platforms from deplatforming political candidates for office and from censoring or limiting the visibility of posts by or about them. These provisions apply not just to social media sites, but also to search engines and even the comment sections of websites, if they have enough users. The law would prohibit most major websites from policing their community standards whenever those standards apply to candidates as broadly defined by the Florida law. Because of the difficulty of determining which Florida users might be candidates, the statute could have the practical effect of broadly stopping or reducing the removal of hateful, inciting, and nongermane posts. Platforms would also be prohibited from changing their terms of service more than once every 30 days, severely limiting their flexibility to deal with rapidly developing conditions.
The Texas law, H.B. 20, is more expansive, applying to posts by and about users who reside, conduct business, or share or receive content in Texas, not just political candidates for office; it also sets a lower threshold for the minimum number of users on websites to which it applies. It prohibits websites and platforms from “censoring” any content except “unlawful expression,” an incredibly narrow category that effectively includes only incitement to violence, true threats, and, in some cases, obscenity and defamation. Websites would not be able to remove or limit the reach of spam, scams, hate speech, misinformation, harassment, and a host of other harmful communications. The broad ban on content moderation could also prevent social media sites from using their algorithms to provide users with their preferred content, because failing to keep all posts on an equal footing could be considered censorship.
These laws, if upheld, would affect the ability of platforms and websites to respond to emerging threats and challenges in the digital space, such as countering misinformation from foreign sources during elections or crisis events such as the Christchurch shooting in New Zealand, in which a man radicalized online livestreamed his white supremacist massacre of 51 people at two mosques. The Texas law would require content originating from the United States that is not illegal, even if awful and grotesque, to remain on social media platforms, comment and review pages, and any other forum where public posting is permitted. Under the Florida law, platforms could not deprioritize hateful or grotesque posts without providing the user notice and potentially being subject to significant penalties.
The Florida and Texas laws could also significantly erode safety tools that social media platforms and websites use to keep users, especially minors, safe from distressing material and sensitive content that is still lawful. For instance, YouTube Kids, which is designed for children, could be prohibited from removing offensive content that is nonetheless legal or be required to host political content if it reaches the user thresholds set by the states.
Indeed, these laws may be impossible for large existing platforms to operationalize, raising the possibility that companies would need to resort to more extreme measures. Some companies have built the ability to block certain content in specific geographic areas to comply with local law; the Florida and Texas laws demand the opposite: that content be left unrestricted. If compliance is nearly impossible or too high risk, companies might be forced to consider blocking access to their sites in those states altogether, a tactic normally exercised in reverse by repressive governments seeking to cut off access, though the Texas law attempts to foreclose even that option.
Why:
These laws are part of a larger push by right-wing interests to prohibit social media platforms and other websites from engaging in content moderation, allowing societally harmful mis- and disinformation, such as anti-vaccine and COVID-19 hoaxes, deepfakes, and antisemitic and anti-LGBTQ propaganda, to prosper. In congressional hearings, Republican members have been critical of social media platforms for allegedly “censoring” right-wing speech and for deplatforming former President Donald Trump in the wake of the January 6 insurrection. As the 11th Circuit noted in its opinion, the states were explicit that their intent in compelling the platforms to host speech was political, not a matter of good policy. These laws come at a time when more Americans are asking that online mis- and disinformation be restricted and when the threat from mis- and disinformation to democracy is growing.
Conclusion
Paradoxically, this effort to enforce “free expression” would lead to a landscape where extremist and defamatory content flourishes and mainstream content is pushed to the fringes.
For example, under the Texas law, Facebook and X would be required to host white nationalist and antisemitic posts like those the Tree of Life synagogue shooter shared on the extreme, far-right social media platform Gab, so long as the posts do not constitute a “true threat” of violence. However, an individual refuting antisemitism or promoting progressive ideals on Gab, or even on Truth Social, could legally have their posts removed because those platforms do not meet the user threshold for the Texas law to apply.
Beyond creating this paradox, a Supreme Court ruling upholding these state laws would be earth-shattering in its defiance of the U.S. Constitution. Not only would it reverse centuries of First Amendment jurisprudence, but it could also result in vast economic and social harm. The internet is a huge driver of the economy, and upholding these laws would subject platforms to a patchwork of state laws that would be difficult, if not impossible, to comply with, while opening them up to innumerable lawsuits. As for societal damage, the Florida provision limiting changes to terms of service to once every 30 days would eliminate the agility that platforms and websites need to react to unique challenges and rapidly changing threat landscapes in order to prevent the spread of dangerous content and misinformation. It could also significantly limit platforms’ use of safety tools, such as blurring harmful or sensitive content based on age restrictions, flagging content as sensitive, or attaching warnings or resources to posts on topics such as eating disorders or self-injury. More concerning, if the Texas law were permitted to stand, it could effectively establish the entire internet as a haven for extremism, rather than keeping extremism relegated to niche traffic and the darker corners of the web.
About the authors: Devon Ombres is the senior director for courts and legal policy at the Center for American Progress. Nicole Alvarez is a senior policy analyst for technology policy at the Center for American Progress.
This article was published by the Center for American Progress.