The Facebook founder and CEO continues a hands-off trend, deferring responsibility for the platform’s impact on our lives.
We are happy to see Mark Zuckerberg become more serious about the influence of Facebook as a platform people use to spread misinformation.
A (comparatively) tiny number of people profit handsomely by simply spreading whatever “information” gets the most clicks, with no regard for the societal repercussions their misinformation causes or for how it shapes the lives and worldviews of the people it reaches. It’s great that Facebook aims to take a more serious stand against this exploitative type of content.
That said, when Mr. Zuckerberg says:
“We do not want to be arbiters of truth ourselves, but instead rely on our community and trusted third parties”
…the implication gives us pause. This hands-off stance continues the company’s trend of refusing to accept any responsibility in the matter, deferring it instead to “community and trusted third parties”.
There are two concerns with this.
First, the truth is not something one can arbitrate passively. You can’t be “hands-off” about the truth. As the platform on which people spread information, Facebook cannot abstain from this role: its product design and policy decisions invariably render it an arbiter of truth. In deciding who may be present and what may be published on its platform, Facebook is very much an arbiter of truth. When it acts or pretends as if it isn’t, it doesn’t stop being an arbiter of truth; it simply becomes a bad one.
By pretending not to be an arbiter of truth, you allow the loudest voices on your platform to be false arbiters of truth on your behalf.
Tolerating hate speech on one’s platform is itself an act of arbitration: a ruling on whether hate speech is permitted there. By banning hate speech (which Facebook technically does in policy, but enforces poorly), it establishes that hate speech is an unacceptable way to express disagreement or dislike. This is a truth that protects people from hate crimes, because it normalizes curtailing the kinds of violent and aggressive rhetoric that lead to violent acts. Hate crimes are committed by a much, much smaller subsegment of the group that uses hate speech, but those who commit them are enabled by an environment that tolerates that kind of speech.
Second, the “community and trusted third parties” are notoriously bad at curtailing the impact of misinformation. Snopes et al. are not automatically linked to every article or post with a hyperbolic message, nor could they be. Community reporting often takes a long time to lead to action, if it ever does, and it essentially requires people to police pages that Facebook’s algorithms specifically don’t show them in the first place.
The projects Mr. Zuckerberg lists as in development to address these issues are all important steps that we welcome and look forward to seeing deployed. We encourage Facebook to continue down the path of tackling fake and willfully deceptive news, and treating it more seriously than before. We also hope the company will recognize the responsibility it bears in shaping our lives: not just through the ever-prominent social media platform that connects many of us to one another, but also through the cultural narrative it allows to be driven on its own platform.