Governments worldwide are facing the challenge of effectively managing artificial intelligence (AI). While AI has the potential to boost economies and simplify tasks, it also presents risks such as AI-enabled crime, misinformation, increased surveillance, and discrimination. The European Union has taken a leading role in addressing these risks with the implementation of the Artificial Intelligence Act. This groundbreaking law serves as a model for other countries, including Australia, as they work towards ensuring the safety and benefits of AI for all.

AI is already deeply integrated into society, powering algorithms that recommend music and movies, facial recognition systems, and various services in hiring, education, and healthcare. However, AI is also being misused for purposes such as creating deepfake content, facilitating scams, and violating privacy and human rights. Recent cases, like Clearview AI breaching privacy laws in Australia, highlight the urgent need for better regulation of AI technologies. Even AI developers have called for laws to manage AI risks.

The EU’s Artificial Intelligence Act, which came into force on August 1, 2024, is a significant step in the right direction. It imposes requirements on AI systems according to their level of risk. High-risk systems, such as those used in law enforcement or healthcare, face stricter obligations, while lower-risk systems, like chatbots, carry fewer. The act also bans some systems outright as posing unacceptable risk, including those that manipulate individual decisions through subliminal techniques and indiscriminate facial recognition systems.

Other countries are also taking action to regulate AI. The Council of Europe adopted the first international treaty requiring AI to respect human rights, democracy, and the rule of law. Canada is debating the proposed Artificial Intelligence and Data Act, which takes an approach similar to the EU law, and the US government has proposed multiple laws addressing different AI systems in various sectors.

In Australia, concerns about AI have prompted the government to initiate public consultations and establish an AI expert group to develop proposed legislation. The government also plans to reform laws in healthcare, consumer protection, and the creative industries to address AI challenges. The risk-based approach to AI regulation, used by the EU and other countries, is a good starting point. However, a single law cannot address the complexities of AI in specific industries. Sector-specific legislation, such as healthcare law, will still be needed to address the ethical and legal issues AI raises in those fields.

Regulating diverse AI applications across sectors is a complex task, and comprehensive and enforceable laws are still lacking in many countries. Policymakers must collaborate with industry and communities to ensure that AI delivers its promised benefits to society while minimizing harm.
