...

Facebook’s content policies are easily bypassed by conspiracy theorists’ tactics

During the COVID-19 pandemic, social media platforms faced an influx of far-right and anti-vaccination communities spreading dangerous conspiracy theories. These included false claims about vaccines being a form of population control and the virus being a “deep state” plot. Governments and organizations like the World Health Organization had to divert resources from vaccination campaigns to debunk these falsehoods. However, platforms were criticized for not doing enough to stop the spread of misinformation. In response to these concerns, Meta, the parent company of Facebook, made policy announcements aimed at addressing the issue.

One of the techniques Meta employed was shadowbanning: algorithmically reducing the visibility of misinformation in users’ feeds, search results, and recommendations. Meta also relied on fact-checkers to label misinformation. Shadowbanning is a contested technique because of its lack of transparency, and a new study published in the journal Media International Australia examined how effective it actually is.
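
Meta has not published the mechanics of this down-ranking, but the general idea can be sketched as a scoring penalty applied at feed-ranking time: a flagged post still exists, it just scores too low to surface. The sketch below is a toy illustration of that idea; the Post fields and the FLAGGED_PENALTY multiplier are invented assumptions, not Meta’s actual system.

```python
from dataclasses import dataclass

FLAGGED_PENALTY = 0.1  # hypothetical multiplier; Meta's real weights are not public

@dataclass
class Post:
    author: str
    text: str
    engagement_score: float           # baseline relevance signal
    flagged_as_misinfo: bool = False  # set by fact-checkers in this sketch

def rank_feed(posts: list[Post], limit: int = 10) -> list[Post]:
    """Order posts by score, silently demoting flagged content."""
    def score(post: Post) -> float:
        s = post.engagement_score
        if post.flagged_as_misinfo:
            s *= FLAGGED_PENALTY  # the "shadow" part: no notice to the author
        return s
    return sorted(posts, key=score, reverse=True)[:limit]
```

Because the author never receives a notice and the post is never removed, the behavior is invisible from the outside, which is why the technique draws the transparency criticism mentioned above.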

The study tracked the performance of 18 Australian far-right and anti-vaccination accounts that consistently shared misinformation between January 2019 and July 2021, and mapped that performance against five content moderation policy announcements made by Meta. The findings revealed two divergent trends: the accounts’ median performance declined after March 2020, yet their mean performance rose after October 2020.

The discrepancy arose because, while most of the monitored accounts underperformed, a few overperformed and continued to attract new followers even after the announced policy change in February 2021.
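
The divergence between the two averages is simple arithmetic: when most accounts shrink but a few grow sharply, the median falls while the mean rises. The engagement figures below are invented for illustration and are not the study’s data.

```python
from statistics import mean, median

# Invented engagement figures (interactions per post) for ten accounts,
# before and after a hypothetical policy announcement.
before = [120, 110, 100, 95, 90, 85, 80, 75, 70, 65]
after = [40, 38, 35, 30, 28, 25, 22, 20, 900, 1200]  # two accounts overperform

print("median:", median(before), "->", median(after))  # 87.5 -> 32.5: most accounts suppressed
print("mean:  ", mean(before), "->", mean(after))      # 89 -> 233.8: outliers drag the mean up
```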

To understand why some accounts thrived despite shadowbanning, the researchers scraped and analyzed comments and user reactions from posts on these accounts. They found that users were highly motivated to engage with problematic content, treating labels and shadowbans as challenges to overcome. Users employed tactics like deliberate typos or code words to evade algorithmic detection. They also engaged in conspiracy “seeding”: adding links to archiving sites or less-moderated platforms in comments so that content labeled as misinformation could be redistributed without detection.
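
The study does not describe Meta’s detection systems, but the evasion tactic is easy to see against a toy exact-match keyword filter: a deliberate misspelling or a community code word simply falls outside the match set. The blocklist terms and sample posts below are hypothetical.

```python
import re

BLOCKLIST = {"vaccine", "covid", "pfizer"}  # illustrative terms only

def naive_flag(text: str) -> bool:
    """Flag a post when any blocklisted keyword appears verbatim."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in BLOCKLIST for word in words)

print(naive_flag("the vaccine is population control"))  # True: caught
print(naive_flag("the va><ine is population control"))  # False: deliberate typo slips through
print(naive_flag("the juice is population control"))    # False: code word slips through
```

Production classifiers are far more sophisticated than exact matching, but the arms-race dynamic the study documents is the same: each new detection rule invites a new workaround.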

The study also revealed that platform suppression of content fueled further conspiracies about big tech and its alleged complicity with “Big Pharma” and governments. Some users recommended moving sensitive content to alternative, moderation-lite platforms such as Rumble and Twitch.

The researchers concluded that Meta’s suppression techniques, while partially effective at containing the spread of misinformation, did not stop those invested in sharing and finding it from doing so. They suggested that firmer policies on content removal and user banning would be necessary to address the problem. However, Meta’s previous announcements indicate little appetite for such measures, which could allow the misinformation playground to continue thriving.
