Australia’s election regulator has expressed concerns about its ability to address false AI-generated content related to the election process. The admission, made during a Senate committee hearing on adopting artificial intelligence (AI), highlights the growing urgency of discussions worldwide on the relationship between AI and democracy.

With over 60 countries holding elections in 2024, including Australia, the use of generative AI tools to create and consume information is rapidly changing the landscape. The question arises: how can we ensure the integrity of, and trust in, elections in the era of generative AI?

One of the most significant risks AI poses to democracy is synthetic content, often referred to as “deepfakes,” which can be used to spread misinformation among voters. Experts have identified “misinformation and disinformation” as the top global risks over the next two years. Instances are already occurring: a political consultant used a synthetic clone of US President Joe Biden’s voice in robocalls, and AI-generated videos are increasingly used in India’s election campaigns.

However, AI also presents opportunities for democracy. It can facilitate informed civic engagement by simplifying complex policy concepts and enabling automatic translations into multiple languages. Therefore, a comprehensive policy on AI and democracy should consider not only the risks but also the potential benefits.

To ensure a healthy democracy, a broad perspective is necessary. Democracy encompasses more than just free and fair elections; it also includes informed civic engagement, tolerance, political pluralism, responsiveness to public needs, transparency, and accountability. Therefore, when discussing the relationship between AI and democracy, it is essential to address concerns such as political representation, public interest journalism, media literacy, and social cohesion.

While AI introduces novel elements into the democratic ecosystem, it is crucial to learn from past experiences with new communication technologies. Historical examples, such as Gutenberg’s printing press and the rise of social media, provide valuable insights into regulating and controlling information flows. By discerning the genuinely novel aspects of generative AI and drawing on applicable policy tools from previous information technology revolutions, we can navigate the challenges effectively.

Australia stands at a critical juncture, with various policy initiatives in progress ahead of the next federal election. The government is preparing responses to its AI consultations, considering rules for watermarking AI-generated content, and reviewing the Online Safety Act 2021 to combat online abuse. To address the AI-democracy relationship comprehensively, the Australian government should develop a coordinated national approach, monitor international elections for policy insights, require political actors to disclose and watermark any deepfakes they use, and ensure sufficient resources for the relevant regulatory bodies.

These steps should initiate a broader national conversation on supporting Australia’s democracy to thrive in the age of AI.
