The Race to Regulate Artificial Intelligence: Determining the Authors of AI Rules

The field of artificial intelligence (AI) encompasses a wide range of activities involving machines performing tasks with or without human intervention. Our understanding of AI is largely shaped by where we encounter it, from facial recognition tools and chatbots to photo editing software and self-driving cars.

When we think of AI, tech companies like Google, Meta, Alibaba, and Baidu often come to mind. However, governments around the world are also playing a significant role in shaping the rules and regulations surrounding AI systems.

Since 2016, regions and nations across Europe, Asia-Pacific, and North America have been implementing regulations targeting AI technologies. The European Union, China, the United States, and the United Kingdom have emerged as key players in shaping the development and governance of AI on a global scale.

Efforts to regulate AI have gained momentum in recent years. In April 2021, the European Commission proposed the AI Act, a regulatory framework that would impose obligations on providers and users according to the level of risk posed by different AI systems. China, by contrast, has regulated specific aspects of AI piecemeal, targeting areas such as algorithmic recommendation services and deepfake technology.

China’s proactive approach to AI regulation has prompted the US to take action. In October 2023, the White House issued an executive order on safe, secure, and trustworthy AI, addressing issues of equity, civil rights, and specific applications of the technology.

Countries like Japan, Taiwan, Brazil, Italy, Sri Lanka, and India are also adopting precautionary measures to mitigate the risks associated with the widespread integration of AI.

The EU’s AI Act, China’s AI regulations, and the White House executive order reflect shared interests among nations involved in AI development. These common concerns have led to collaborations such as the Bletchley Declaration, in which signatory countries pledged cooperation on AI safety.

Despite the recognized risks, all of these jurisdictions continue to support AI development and innovation because of its potential economic benefits and its contributions to national security and international leadership.

However, current AI regulations have limitations. Jurisdictions lack clear, common definitions of the AI technologies they govern, which makes precise legal compliance difficult. Many regulations also leave key concepts undefined, including risk, safety, transparency, fairness, and non-discrimination.

Local jurisdictions are launching their own regulations within national frameworks to address specific concerns and balance AI regulation and development. However, narrowly defining AI technologies, as China has done, may lead companies to find ways to circumvent the rules.

Moving forward, “best practices” for AI governance are emerging from local and national jurisdictions and from transnational organizations. Global collaboration will be shaped both by ethical consensus and by national and geopolitical interests.