How can news media responsibly utilize AI after Nine’s controversial ‘AI editing’ of a Victorian MP’s dress?

An article published earlier this week discussed the use of generative AI in image editing and design tools like Photoshop and Canva. The article highlighted a recent incident where Channel Nine altered an image of Victorian MP Georgie Purcell, causing controversy and accusations of sexism. Nine apologized for the edit and attributed it to an AI tool in Adobe Photoshop.

The article explained that generative AI has become more prevalent in these tools, allowing users to generate or augment images based on text prompts. Photoshop has also introduced a feature called generative fill, which can add content to images without a text prompt. This feature, powered by Adobe's generative AI model Firefly, was reportedly used by Nine to resize the image of Purcell, generating new parts of the image that were not originally there.

The legality of altering someone's image in this way depends on the jurisdiction and on the risk of reputational harm. If an altered image is found to cause, or to have the potential to cause, "serious harm" to a person's reputation, it may ground a defamation claim.

The article also discussed other uses of generative AI in news organizations, such as creating photorealistic images of current events or using it in place of stock photography. Some news outlets adhere to codes of conduct that require transparency and accuracy in image manipulation.

To ensure responsible use of generative AI, media outlets can implement safeguards such as policies that restrict the use of AI-generated content or only allow it for non-realistic illustrations. Transparency with audiences about the use of AI and editing processes is also important. Adobe’s Content Authenticity Initiative, which includes major media organizations, aims to provide digital history and transparency for AI-generated or augmented content.

News editors are also wary of bias in AI-generated imagery, since the data used to train AI models is often unrepresentative. Additionally, the World Economic Forum has identified AI-fueled misinformation and disinformation as a significant short-term global risk.

In light of these risks, individuals should maintain healthy skepticism when consuming online content and be mindful of the sources they rely on for news and information. Doing so supports informed participation in democracy and reduces the likelihood of being misled by scams and manipulated media.
