
Who will write the rules for AI? How nations are racing to regulate artificial intelligence

Artificial intelligence (AI) is a label that can cover a huge range of activities related to machines undertaking tasks with or without human intervention. Our understanding of AI technologies is largely shaped by where we encounter them, from facial recognition tools and chatbots to photo editing software and self-driving cars.

Authors


  • Fan Yang

Research Fellow, Melbourne Law School and ARC Centre of Excellence for Automated Decision-Making and Society, The University of Melbourne


  • Ausma Bernot

    Postdoctoral Research Fellow, Australian Graduate School of Policing and Security, Charles Sturt University

When you think of AI, you might think of tech companies, from existing giants such as Google, Meta, Alibaba and Baidu to new players such as OpenAI, Anthropic and others. Less visible are the world’s governments, which are shaping the landscape of rules in which AI systems will operate.

Since 2016, tech-savvy regions and nations across Europe, Asia-Pacific and North America have been establishing AI regulations. (Australia has yet to do so, and is still investigating the possibility of such rules.)

Currently, there are hundreds of AI governance initiatives worldwide. The European Union, China, the United States and the United Kingdom have emerged as pivotal figures in shaping the development and governance of AI.

Ramping up AI regulations

AI regulation efforts began to accelerate in April 2021, when the EU proposed an initial framework for regulation called the AI Act. These rules aim to set obligations for providers and users based on the level of risk associated with different AI technologies.

As the EU AI Act was being negotiated, China moved forward with proposing its own AI regulations. In Chinese media, policymakers have discussed a desire to be early movers and to offer global leadership in both AI development and governance.

Where the EU has taken a comprehensive approach, China has been regulating specific aspects of AI one after another. These have ranged from algorithmic recommendation services, to deep synthesis or “deepfake” technology, to generative AI.

China’s full framework for AI governance will be made up of these policies and others yet to come. The iterative process lets regulators build up their know-how and regulatory capacity, and leaves flexibility to implement new legislation in the face of emerging risks.

A ‘wake-up call’

China’s AI regulation may have been a wake-up call to the US. In April 2023, an influential lawmaker said his country should “not permit China to lead on innovation or write the rules of the road” for AI.

On October 30 2023, the White House issued an executive order on safe, secure and trustworthy AI. The order attempts to address broader issues of equity and civil rights, while also concentrating on specific applications of the technology.

Alongside the dominant actors, countries with growing IT sectors including Japan, Taiwan, Brazil, Italy, Sri Lanka and India have also sought to implement defensive strategies to mitigate potential risks associated with the pervasive integration of AI.

AI regulations worldwide reflect a race against foreign influence. At the geopolitical scale, the US competes with China economically and militarily. The EU emphasises establishing its own digital sovereignty and striving for independence from the US.

On a domestic level, these regulations can be seen as favouring large incumbent tech companies over emerging challengers. This is because it is often expensive to comply with legislation, requiring resources smaller companies may lack.

Alphabet, Meta and Tesla have supported calls for AI regulation. At the same time, Alphabet-owned Google has joined Amazon in investing billions in OpenAI’s competitor Anthropic, and Tesla boss Elon Musk’s xAI has just launched its first product, a chatbot called Grok.

Shared vision

The EU’s AI Act, China’s AI regulations and the White House executive order show shared interests between the nations involved. Together, they set the stage for last week’s AI safety summit at Bletchley Park, at which 28 countries including the US, UK, China, Australia and several EU members pledged cooperation on AI safety.

Countries or regions see AI as a contributor to their economic development, national security, and international leadership. Despite the recognised risks, all jurisdictions are trying to support AI development and innovation.

By 2026, worldwide spending on AI-centric systems may pass US$300 billion, by one estimate. By 2032, according to a Bloomberg report, the generative AI market alone may be worth US$1.3 trillion.

Numbers like these, and talk of perceived benefits from tech companies, national governments and consultancy firms, tend to dominate media coverage of AI. Critical voices are often sidelined.

Competing interests

Beyond economic benefits, countries also look to AI systems for defence, cybersecurity, and military applications.

At the UK’s AI safety summit, these competing interests were on display. While China agreed to the Bletchley declaration made on the summit’s first day, it was excluded from public events on the second day.

One point of disagreement is China’s social credit system, which operates with little transparency. The EU’s AI Act regards social scoring systems of this sort as creating unacceptable risk.

The US perceives China’s investments in AI as a threat to its national security, particularly in terms of cyberattacks and disinformation campaigns.

These tensions are likely to hinder global collaboration on binding AI regulations.

The limitations of current rules

Existing AI regulations also have significant limitations. For instance, there is no clear, common set of definitions of different kinds of AI technology in current regulations across jurisdictions.

Current legal definitions of AI tend to be very broad, raising concern over how practical they are. This broad scope means regulations cover a wide range of systems which present different risks and may deserve different treatments. Many regulations lack clear definitions for risk, safety, transparency, fairness, and non-discrimination, posing challenges for ensuring precise legal compliance.

We are also seeing local jurisdictions launch their own regulations within the national frameworks. These may address specific concerns and help to balance AI regulation and development.

One US state, for example, has introduced two bills to regulate AI in employment. In China, one city government has proposed a system for grading, management and supervision of AI development at the municipal level.

However, defining AI technologies narrowly, as China has done, poses a risk that companies will find ways to work around the rules.

Moving forward

Sets of “best practices” for AI governance are emerging from local and national jurisdictions and transnational organisations, with oversight from groups such as the US’s ³Ô¹ÏÍøÕ¾ Institute of Standards and Technology. The existing AI governance frameworks from the UK, the US, the EU and – to a limited extent – China are likely to be seen as guidance.

Global collaboration will be underpinned by ethical consensus and, more importantly, by national and geopolitical interests.

The Conversation

The authors do not work for, consult, own shares in or receive funding from any organisation that would benefit from this article, and have disclosed no affiliations other than their research institutions.

Courtesy of The Conversation.