...

How can we halt the alarming surge of AI-generated falsehoods amplified by algorithms?

The rise of generative artificial intelligence (AI) tools is exacerbating the problem of misinformation, disinformation, and fake news. Tools like OpenAI’s ChatGPT and Google’s Gemini make it easier to produce content while making it harder to discern what is true and authentic.

Malicious actors can use AI tools to automate the creation of convincing but misleading text, raising concerns about the authenticity of the content we consume online. This has significant implications, as organizations seeking to influence public opinion or elections can now scale their operations with AI, and their content is widely disseminated by search engines and social media.

A recent study in Germany found a growing prevalence of AI-generated content in results from search engines such as Google, Bing, and DuckDuckGo. This shift challenges the traditional reliance on editorial control to uphold journalistic standards and verify facts.

NewsGuard, an internet trust organization, has identified 725 unreliable websites that publish AI-generated news and information with little to no human oversight. Google, meanwhile, has released an experimental generative AI tool that lets publishers summarize articles from external websites.

When the same platforms both host content and develop generative AI, the line between distributor and creator blurs, undermining trust in online content. Governments have tried to respond: Australia, for example, has amended its criminal codes and introduced a bargaining code requiring platforms to pay for news content. Yet these efforts show how difficult effective action is.

As digital products become integral to business and everyday life, platforms, AI companies, and big tech gain the leverage to resist government regulation. Early calls to regulate generative AI have faded as the technology becomes more pervasive.

The rapid pace of change makes it difficult to establish safeguards against the risks of generative AI. The World Economic Forum’s Global Risks Report ranks mis- and disinformation as the most severe global risk over the next two years.

To address these challenges, Australia’s eSafety Commissioner is developing regulations to mitigate the harms caused by generative AI. “Safety by design” is a key concept, requiring tech firms to build safety considerations into their products from the outset. The US is further ahead on AI regulation: President Joe Biden’s executive order on the safe deployment of AI requires companies to share safety test results and to conduct red-team testing.

To protect against the risks of generative AI and disinformation, three steps are proposed: clear regulation, education in media literacy, and making safety technology part of product development strategies.

While users are aware of the rise of AI-generated content, research shows they tend to underestimate their own risk of believing fake news. It should not be the responsibility of users to sift through AI-generated content to find trustworthy information.
