In the past few years, social media companies have faced intense criticism for not taking a more active role in stopping the spread of hate speech and misinformation on their platforms. Meanwhile, the White House thinks those same companies are going too far in their efforts to regulate content and is currently drafting an executive order called “Protecting Americans from Online Censorship,” which would give the Federal Communications Commission oversight over these decisions. The order seems to be an outgrowth of the social media summit that President Trump held last month, where his 2020 campaign manager Brad Parscale said, “At a time when social media platforms are banning conservative voices and supporters of the president, it’s important for President Trump to emphasize that he appreciates their support and wants to protect their First Amendment rights.”
The executive order hasn’t been released yet. If and when it is, I’m sure there will be plenty of ink spilled over whether it’s constitutional and whether Twitter and Facebook are truly biased against conservatives. But let’s put that aside for a moment. At a time when people across the political spectrum are upset with tech companies, albeit for different reasons, I’d like to posit that the real problem with social media isn’t that it allows hate to spread or that it discriminates against users based on their points of view. It’s that we only become aware that those things are happening when they involve famous or powerful people.
The First Amendment prevents the government from censoring our speech, but it doesn’t apply to private companies. Social media companies actually have their own First Amendment rights and are free to create their own policies that ban whatever kinds of content they want. Most platforms do have rules or community guidelines that ban content that threatens or harasses other users. And in the wake of recent controversies, many platforms have gone further than that. Facebook currently bans hate speech, which it defines as a direct attack on someone because of characteristics like race, ethnicity, national origin, religion, gender, sexual orientation or disease. YouTube bans content that promotes violence or incites hatred against individuals or groups because of these characteristics. Twitter threads the needle a bit, not banning hate speech but banning “hateful conduct,” meaning hateful messages that actually target specific people.
All of these content policies censor speech that would be protected by the First Amendment, but again, these platforms don’t have to comply with the First Amendment. Facebook is a private company and it has no more obligation to host you than a restaurant has to serve a customer who hurls insults at the waiters or refuses to wear shoes. What’s more troubling is how inconsistently enforced these policies are. A restaurant with a strict dress code isn’t necessarily controversial; a restaurant that bans some patrons for not adhering to the dress code, allows others to walk in naked and has an ever-shifting definition of what qualifies as a bowtie is infuriating.
To be fair, the sheer volume of content posted on social media platforms makes it difficult for them to consistently moderate content. They use a combination of algorithms and human moderators to identify posts that violate their policies — and it doesn’t always work that well.
Determining whether a post is hateful depends on context, but algorithms aren’t sophisticated enough yet to look at that — they have to use specific rules to weed out content. In 2017, an investigation by ProPublica into Facebook’s hate speech standards revealed that a post advocating for killing all radicalized Muslims didn’t count as hate speech because it targeted a subgroup of a protected category — radicalized Muslims, not all Muslims. The algorithm was programmed to designate speech attacking a category as a whole as hate speech, not speech targeting a subset.
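To see how a rule like that can produce such a counterintuitive result, here is a minimal, purely hypothetical Python sketch. The category list, function name and qualifier check are invented for illustration; they are not drawn from Facebook’s actual system, which has never been made public.

# Hypothetical illustration only -- NOT Facebook's real classifier.
# It mimics the rule ProPublica described: an attack aimed at an entire
# protected category is flagged, but the same attack aimed at a qualified
# subset of that category slips through.

PROTECTED_CATEGORIES = {"muslims", "women", "immigrants"}  # invented example list

def is_flagged_as_hate_speech(post: str) -> bool:
    """Naive keyword rule: flag only attacks on a whole protected category."""
    words = post.lower().split()
    for i, word in enumerate(words):
        if word.strip(".,!?") in PROTECTED_CATEGORIES:
            # If the category noun is preceded by a qualifier such as
            # "radicalized", this crude rule treats the target as a subset
            # rather than the whole category, and does not flag the post.
            has_qualifier = i > 0 and words[i - 1] not in {"all", "the"}
            if not has_qualifier:
                return True
    return False

print(is_flagged_as_hate_speech("Kill all Muslims"))              # True: whole category
print(is_flagged_as_hate_speech("Kill all radicalized Muslims"))  # False: subset escapes the rule

The sketch is deliberately simplistic, but it captures the structural problem: a rule keyed to whole categories has no way to recognize that an attack on a subset can be just as hateful.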
Meanwhile, many users were banned for uploading screenshots of racist or sexist messages they had received in order to raise awareness of hate speech. This isn’t dissimilar to Twitter’s recent decision to freeze the campaign account of Senate Majority Leader Mitch McConnell for tweeting out a video showing several protesters shouting violent threats against McConnell outside his home. Twitter’s rules ban users from posting content containing threats of violence, even if it’s the target of the threats doing the posting. (Twitter quickly reversed its decision after rampant criticism from Republicans.)
Social media platforms can and do fine-tune their content rules when controversies arise. For instance, Facebook’s policy does allow for hate speech that’s shared with the intention of educating others. Twitter made an exception for McConnell’s tweet because of its “intent to highlight the threats for public discussion.” But tellingly, these changes usually occur when celebrities or public figures create negative publicity for the companies. Another way that business interests factor into how content rules are enforced? According to a group of former and current YouTube content moderators, high-profile content creators who draw the most advertising revenue “often get special treatment in the form of looser interpretations of YouTube’s guidelines prohibiting demeaning speech, bullying and other forms of graphic content.”
Which means that while certain people have a disproportionate amount of power to change, challenge and skirt content rules, the average person has none.
About the author: Lata Nott is executive director of the First Amendment Center of the Freedom Forum Institute. Contact her via email at lnott[at]freedomforum.org, or follow her on Twitter at @LataNott.
This article was published by the Freedom Forum Institute.