Mark Zuckerberg made several newsworthy choices this week. One — to invoke Holocaust denial as an example of content that Facebook should keep up because “there are different things that different people get wrong” and “it’s hard to impugn [their] intent” — was ill-advised.
But another — to keep Facebook from diving deeper into the business of censorship — was the right call. On Wednesday, Facebook announced a policy, put in place last month, of removing misinformation that contributes to violence, following criticism that content published on the platform had led to attacks against minorities overseas. When pushed to go further and censor all offensive speech, Facebook refused.
While many commentators are focusing legitimate criticism on Zuckerberg’s poor choice of words about Holocaust denial, others are calling for Facebook to adopt a more aggressive takedown policy. What’s at stake here is the ability of one platform that serves as a forum for the speech of billions of people to use its enormous power to censor speech on the basis of its own determinations of what is true, what is hateful, and what is offensive.
Given Facebook’s nearly unparalleled status as a forum for political speech and debate, it should not take down anything but unlawful speech, like incitement to violence. Otherwise, in attempting to apply more amorphous concepts not already defined in law, Facebook will often get it wrong. Given the enormous amount of speech uploaded every day to Facebook’s platform, attempting to filter out “bad” speech is a nearly impossible task. The use of algorithms and other artificial intelligence to try to deal with the volume is only likely to exacerbate the problem.
If Facebook gives itself broader censorship powers, it will inevitably take down important speech and silence already marginalized voices. We’ve seen this before. Last year, when activists of color and white people posted the exact same content, Facebook moderators censored only the activists of color. When Black women posted screenshots and descriptions of racist abuse, Facebook moderators suspended their accounts or deleted their posts. And when people used Facebook as a tool to document their experiences of police violence, Facebook chose to shut down their livestreams. The ACLU’s own Facebook post about censorship of a public statue was also inappropriately censored by Facebook.
Facebook has shown us that it does a bad job of moderating “hateful” or “offensive” posts, even when its intentions are good. Facebook will do no better at serving as the arbiter of truth versus misinformation, and we should remain wary of its power to deprioritize certain posts or to moderate content in other ways that fall short of censorship.
There is no question that giving the government the power to separate truth from fiction and to censor speech on that basis would be dangerous. If you need confirmation, look no further than President Trump’s preposterous co-optation of the term “fake news.” A private company may not do much better, even if it’s not technically bound by the First Amendment to refrain from censorship.
As odious as certain viewpoints are, Facebook is right to resist calls for further outright censorship. When it comes to gatekeepers of the modern-day public square, we should hope for commitment to free speech principles.
Vera Eidelman is a William J. Brennan Fellow at the ACLU’s Speech, Privacy, and Technology Project.