The internet is dark and full of terrible sources of information. Sometimes it seems like every day brings a new one to the list. There are the toxic conspiracy theories, deepfakes, and fake news mills. And then there are good old-fashioned internet hoaxes. You know the ones—the sort of fear-mongering, copypasta-esque warnings that came in the form of cryptically bolded email chains a decade ago, and today dot the social media feeds of your friends and relatives.
In comparison to the targeted disinformation campaigns that have dominated headlines in recent years, social media hoaxes seem almost quaint. Aw, we used to get duped by all-caps chain texts that claimed we would end up cursed if we didn’t forward them to five of our friends!
That is, at least, until you realize that they’re somehow still around.
Paris Martineau covers platforms, online influence, and social media manipulation for WIRED.
Like the Momo challenge and most other hoaxes before it, this week’s Instagram privacy hoax was nonsensical and easily debunked by a quick Google search—and, of course, it quickly went viral anyway.
Make no mistake, it’s not because people are stupid, says Whitney Phillips, a professor at Syracuse University who studies misinformation and how it is amplified online. These sorts of hoaxes have staying power because of the peculiar way people process information and arrive at beliefs. When confronted with new information, humans don’t always do the logical thing and evaluate it on its own merits, Phillips says. Instead, we often make snap decisions based on how the information fits with our existing worldviews.
If the story pushed by a meme or hoax fits in a way that feels like a coherent narrative to a critical mass of people, it’s game over, says Phillips. She credits this phenomenon with playing a major role in the duping of upwards of a dozen celebrities on Tuesday.
“If you’re a celebrity in particular, you’re going to have some additional sensitivities to the exposure of personal private information, because there’s a market for your personal private information above and beyond the average person’s” (see: the Fappening). “So it makes a lot of sense that these are the kinds of narratives that would resonate with people,” Phillips explains. “That doesn’t mean at all that they’re even the tiniest bit true. But that’s not how belief works.”
The narrative that Big Bad Instagram is going to take all of your most intimate personal data points and use them for nefarious secret purposes is the sort of story that is primed to appeal to the average person, as well, Phillips says, because it contains a kernel of truth: You have all this data out there on the internet, and God knows who has access to it. Chances are, at some point, some part of it will end up being used in a way that you don’t want, and you have no control over it whatsoever.
Of course, it doesn’t help that the very same platforms where so much info is shared are the vectors through which the misinformation designed to co-opt that fear spreads.
“The brain is set up to give us easy answers … so if there’s a hoax that appeals to people’s emotions or intuition, it’s going to trick people, because a lot of people just don’t spend that much time thinking about the things that they see on social media,” says Gordon Pennycook, an assistant professor of behavioral science at the University of Regina in Canada who studies decisionmaking. “Social media is partly to blame, too, because it’s set up to drive engagement, and that engagement often comes at the cost of shutting off people’s brains a little bit.”
Even in cases where users may have an inkling that the content they are choosing to amplify could be inaccurate, the desire to share often wins out, Pennycook says. (See: all of those hoax posts Tuesday with semi-self-aware captions like “Sharing just in case!”)
“Basically, what we find is that accuracy is not the thing that is foremost on people’s minds [when deciding to share something],” he explains. “The act of sharing something is often performative.”
This is what gives internet hoaxes their surprising staying power, Phillips says. So long as users’ motivations for sharing aren’t based in the realm of facts, fact-based solutions aren’t a viable remedy.
“When you have a misinformation campaign, it’s not enough to just fact-check something,” Phillips says. “You actually have to tell a different coherent story, so that people can have that cohesion that our brains really want.” She says that even in situations like this week’s—where the stakes of the hoax at hand are so low that it’s quite literally become a joke—misinformation is still misinformation.
“Sure, this is quote-unquote low-level, and not necessarily doing anybody any harm, but the basic process and mechanism is the same as when you’re talking about anti-vaccination misinformation or trans health misinformation,” Phillips says. By ridiculing those who were duped instead of examining why it is that such a broad swath of people were primed to readily accept and amplify this misinformation, she says, we are missing the forest for the trees. And that doesn’t bode well for any of the more serious disinformation challenges yet to come.