Australia’s eSafety Commissioner has issued legal notices to several major tech companies, including Google, Meta, Telegram, WhatsApp, Reddit, and X (formerly Twitter), demanding information on their efforts to protect Australians from online extremism. The companies have 49 days to respond.

Governments around the world are increasingly pressuring tech companies to address online harms such as child sexual abuse material and bullying. Combating online extremism, however, presents distinct challenges, and regulators need to draw on research into extremism and terrorism to meet them. Extremists use the same platforms as everyone else, so any regulation must be balanced against the rights of everyday users.

Tech companies have launched initiatives such as the Global Internet Forum to Counter Terrorism, but these approaches often fail to capture the full extent of extremist content because of tactics like “swarmcasting”, in which extremists spread their material across multiple platforms at once. Filters and moderation policies focused on individual pieces of content are therefore not enough to combat online extremism.

Identifying and removing extremist content is also complex: terrorist groups often post non-terrorist material, or borderline content that does not violate platform policies. Platforms employ a range of moderation techniques, yet online extremism persists despite these efforts. The lack of a universally accepted definition of terrorism or extremism complicates the issue further.

To address these challenges, regulators should expand their inquiries beyond the major tech players; tailor regulatory approaches to platforms that resist compliance, comply half-heartedly, or struggle to comply; and encourage transparent collaboration between platforms and academia to develop actionable definitions of extremism and effective countermeasures.
