...

Is AI trustworthy enough to write the news? It is already doing so, but not without problems.

The use of artificial intelligence (AI) to generate media content, including news, is on the rise. AI is even being used to create interactive elements that sit alongside news stories, a practice known as the “gamification” of news. This integration of AI into news media is reshaping the landscape and raising concerns about the integrity of journalism as an institution.

A recent incident involving The Guardian and Microsoft shows the problems that can arise. Microsoft republished The Guardian’s article on the death of Lilie James on its news app and website, and alongside it placed an AI-generated poll asking readers to speculate on the cause of her death. The poll was designed to keep readers engaged and make them more likely to respond to advertisements. The Guardian had no control over the poll, which caused the publication significant reputational damage.

The incident highlights the risks of inserting AI-generated elements into news pages, which have traditionally been curated by experts. Polls and quizzes can engage readers and are cheap to produce with AI, but human oversight is needed to ensure the content is appropriate to the story it accompanies.

Major providers of large language models, such as OpenAI, Google, and Meta, have built safeguards to prevent their models from generating harmful content. However, these measures are not foolproof, and inappropriate content can still slip through.
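To make the gap concrete, here is a minimal sketch of how a publisher might gate AI-generated interactive content behind both a provider-side filter and an editor. It assumes the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the poll-drafting prompt and the editorial sign-off step are hypothetical illustrations, not any publisher’s actual workflow.

```python
# A minimal sketch, assuming the OpenAI Python SDK ("pip install openai")
# and an API key in OPENAI_API_KEY. The prompt and the sign-off step are
# illustrative, not any publisher's real pipeline.
from openai import OpenAI

client = OpenAI()

def draft_poll(article_text: str) -> str:
    """Ask a model to draft a short reader poll for a news article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("Write one short, respectful reader poll (a question "
                         "and three answer options) for the following news "
                         "article.")},
            {"role": "user", "content": article_text},
        ],
    )
    return response.choices[0].message.content

def passes_automated_filter(text: str) -> bool:
    """Check the draft against the provider's moderation endpoint."""
    result = client.moderations.create(input=text)
    return not result.results[0].flagged

def review_and_publish(article_text: str) -> None:
    draft = draft_poll(article_text)
    # Automated filters catch overtly harmful output, but not context-specific
    # failures such as a speculative poll attached to a story about a death,
    # so a human editor still has the final say.
    if not passes_automated_filter(draft):
        print("Rejected by automated filter.")
        return
    print("Draft poll:\n", draft)
    if input("Editor approval? [y/N] ").strip().lower() == "y":
        print("Published.")  # stand-in for a real CMS call
    else:
        print("Held for revision.")
```

The automated filter alone would likely have waved through a speculative poll about a death, since moderation endpoints target overtly harmful categories rather than editorial judgment; the human step is the part the Microsoft incident appears to have lacked.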

The accessibility and affordability of generative AI have made it attractive to commercial news businesses. Some companies are using AI to “write” news stories, reducing the need to pay journalists. News Corp, for example, has a small team that produces thousands of AI-generated articles each week. While this can be an efficient way to generate content, it also opens the door to misleading information that is indistinguishable from professionally written articles.

As the technology advances, so do the risks of AI-generated news. Models can now be fine-tuned on specific sources and supplied with recent data, making it possible to build entire news websites from licensed content. While this may be convenient for businesses, it deepens concerns about the loss of human oversight and the potential for misinformation and bias.
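The pipeline described above can be sketched in a few lines. The following is an illustrative example only, assuming the same OpenAI SDK as before; the licensed_articles feed is a hypothetical stand-in for a real wire service or licensed-content store.

```python
# A minimal sketch of the pattern described above: a model is handed recent,
# licensed source material and asked to produce a news story from it.
# Assumes the OpenAI Python SDK; licensed_articles is a hypothetical feed.
from openai import OpenAI

client = OpenAI()

# Hypothetical stand-in for a feed of licensed wire copy.
licensed_articles = [
    "LICENSED WIRE COPY: Local council approves new cycling infrastructure...",
]

def write_story(source_text: str) -> str:
    """Generate a publishable-looking story grounded in a licensed source."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": ("You are a news writer. Write a short news story "
                         "based only on the source material provided.")},
            {"role": "user", "content": source_text},
        ],
    )
    return response.choices[0].message.content

for article in licensed_articles:
    print(write_story(article))
```

The brevity of the loop is the point: nothing in it requires an editor, which is precisely the loss of oversight at issue.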

The use of generative AI in news media could undermine the value of editorially curated news pages. It is important to recognize the limitations of AI and not treat it as a replacement for the work of journalists. Australia’s News Media Bargaining Code was designed to address the power imbalance between big tech and media businesses, but generative AI presents a new challenge the code was not designed to handle.

In conclusion, while AI brings benefits to news media, it also brings risks and challenges that must be addressed to protect the integrity of journalism. Human oversight and critical thinking are essential to ensure that AI-generated content does not compromise the quality and accuracy of news.
