A recent survey conducted in early 2024 has revealed that Australians are deeply concerned about the risks associated with artificial intelligence (AI) and are calling for stronger government action to ensure its safe development and use.

The survey, which included a nationally representative sample of 1,141 Australians, found that 80% of respondents believe preventing catastrophic risks from advanced AI systems should be a global priority on par with pandemics and nuclear war.

As AI systems become more capable, decisions regarding their development, deployment, and use are becoming increasingly critical. The allure of powerful technology may tempt companies and countries to forge ahead without considering the potential risks.

The survey also highlighted a disconnect between the AI risks that the media and government tend to focus on and the risks that Australians consider most important.

Public concern about AI risks is growing as increasingly powerful AI systems are developed and deployed. While AI offers significant potential benefits, such as breakthroughs in biology and medicine, concerns are mounting about our preparedness for powerful AI systems that could be misused or behave in unintended and harmful ways.

In response to these concerns, governments around the world are attempting to regulate AI. For example, the European Union has approved a draft AI law, the United Kingdom has established an AI safety institute, and US President Joe Biden recently signed an executive order promoting safer development and governance of advanced AI.

The survey found that Australians want action to prevent dangerous outcomes from AI: preventing “dangerous and catastrophic outcomes from AI” ranked as the top priority for government action. Australians are particularly concerned about AI systems that are unsafe, untrustworthy, and misaligned with human values. Other top worries include the use of AI in cyber attacks and autonomous weapons, AI-related unemployment, and AI failures damaging critical infrastructure.

There is strong public support for the establishment of a new government body dedicated to AI regulation and governance, similar to the Therapeutic Goods Administration for medicines. Australians also believe that the country should play a leading role in international efforts to regulate AI development. Additionally, two-thirds of Australians would support a temporary pause on AI development for six months to allow regulators to catch up.

The Australian government published an interim plan in January 2024 to address AI risks, which includes strengthening existing laws and developing voluntary AI safety standards. However, the survey shows that Australians prefer a more safety-focused, regulation-first approach. Suggestions for achieving this include: establishing an AI safety lab and a dedicated AI regulator; setting robust standards and guidelines for responsible AI development; independent auditing of high-risk AI systems; corporate liability and redress for AI harms; increased public investment in AI safety research; and active engagement of the public in shaping the future of AI governance.

Effectively governing AI is a significant challenge for humanity, and Australians are acutely aware of the risks involved. They are calling on the government to address these challenges promptly and prioritize preventing dangerous and catastrophic outcomes over simply bringing the benefits of AI to everyone.