The Australian government has announced its response to the Safe and Responsible AI in Australia consultation, which received more than 500 submissions. Rather than adopting a single, economy-wide AI law along the lines of the EU's AI Act, the government plans to target high-risk areas of AI deployment, such as discrimination in the workplace, the justice system, surveillance, and self-driving cars. A temporary expert advisory group will be established to support the development of these regulations.

This approach raises two immediate questions: how will high-risk areas be defined, and should low-risk AI applications face any regulation at all? The article argues that existing principles, guidelines, and regulations can be adapted to address concerns about AI tools, much as risk assessment is already applied to other technologies. The lack of regulatory guardrails for AI tools already in use is a significant gap, and both consumers and organizations need guidance on their appropriate adoption. Defining "high-risk" settings is also challenging because risk is contextual: it depends on where and how a tool is deployed and the potential harm that results, not on the technology alone.

The article emphasizes that the expert advisory body needs diverse membership and expertise, including representation from industry, academia, civil society, and the legal profession. A permanent advisory body could manage the risks posed by future technologies and by new uses of existing tools. Further advice and regulation will also be needed to address issues such as misinformation and transparency in AI-generated content.