
AI affects everyone – including Indigenous people. It’s time we have a say in how it’s built

Since artificial intelligence (AI) became mainstream over the past two years, many of the risks it poses have been widely documented.

Author


  • Tamika Worrell

    Senior Lecturer in the Department of Critical Indigenous Studies, Macquarie University

But some risks of AI are still poorly understood. These include the very particular risks to Indigenous knowledges and communities.

There’s a simple reason for this: the AI industry and governments have largely ignored Indigenous people in the development and regulation of AI technologies. Put differently, the world of AI is too white.

AI developers and governments need to fix this urgently if they are serious about ensuring everybody shares in the benefits of AI.

Indigenous concerns

Indigenous peoples around the world are not ignoring AI. They are having conversations, conducting research and sharing their concerns about the current trajectory of AI and related technologies.

One problem is the theft of cultural intellectual property. For example, users of AI image generation programs can artificially generate artworks in mere seconds that mimic Indigenous art styles and stories.

This demonstrates how easy it is for someone using AI to misappropriate cultural knowledges. These generated images draw on large data sets of publicly available imagery to create something new, but they miss the storying and cultural knowledge embedded in our art practices.

AI technologies also fuel the spread of misinformation about Indigenous people.

The internet is already riddled with misinformation about Indigenous people. The long-running Creative Spirits website, which is maintained by a non-Indigenous person, is a case in point.

Generative AI systems are likely to make this problem worse, because they draw on inappropriate sources, including Creative Spirits.

During last year’s Voice to Parliament referendum in Australia, “no” campaigners also used AI-generated images depicting Indigenous people. This demonstrates the role AI can play in political contexts and the harm it can cause to us.

Another problem is the lack of understanding of AI among Indigenous people. Many Aboriginal and Torres Strait Islander people in Australia don’t know what generative AI is. This reflects an urgent need to provide relevant information and training to Indigenous communities on the use of the technology.

There is also concern about the use of AI in classroom contexts.

Looking to the future

One Hawaiian and Samoan scholar says:

We must think more expansively about AI and all the other computational systems in which we find ourselves increasingly enmeshed. We need to expand the operational definition of intelligence used when building these systems to include the full spectrum of behaviour we humans use to make sense of the world.

Key to achieving this is the idea of “Indigenous data sovereignty”. This would mean Indigenous people retain sovereignty over their own data, in the sense that they own and control access to it.

In Australia, a collective of Indigenous data sovereignty experts offers important considerations and principles for data sovereignty and governance, covering everything from data creation to infrastructure.

The National Agreement on Closing the Gap also affirms the importance of Indigenous data control and access.

This is reaffirmed at a global level as well. In 2020, a group of Indigenous scholars from around the world published a position paper laying out how Indigenous protocols can inform ethically created AI. This kind of AI would centre the knowledges of Indigenous peoples.

In a positive step, the Australian government’s proposed AI guardrails highlight the importance of Indigenous data sovereignty.

For example, the guardrails address the use of data about or owned by Aboriginal and Torres Strait Islander people, in order to “mitigate the perpetuation of existing social inequalities”.

Indigenous Futurisms

Grace Dillon, a scholar from a group of North American Indigenous people known as the Anishinaabe, first coined the term “Indigenous Futurisms”.

Ambelin Kwaymullina, an academic and futurist practitioner from the Palyku nation in Western Australia, describes Indigenous Futurisms as:

visions of what-could-be that are informed by ancient Aboriginal cultures and by our deep understandings of oppressive systems.

These visions, Kwaymullina writes, are “as diverse as Indigenous peoples ourselves”. They are also unified by “an understanding of reality as a living, interconnected whole in which human beings are but one strand of life amongst many, and a non-linear view of time”.

So how can AI technologies be informed by Indigenous ways of knowing?

A first step is for industry to involve Indigenous people in creating, maintaining and evaluating the technologies – rather than asking them retrospectively to approve work already done.

Governments need to also do more than highlight the importance of Indigenous data sovereignty in policy documents. They need to meaningfully consult with Indigenous peoples to regulate the use of these technologies. This consultation must aim to ensure ethical AI behaviour among organisations and everyday users that honours Indigenous worldviews and realities.

AI developers and governments like to claim they are serious about ensuring AI technology benefits all of humanity. But unless they start involving Indigenous people more in developing and regulating the technology, their claims ring hollow.

The Conversation

Courtesy of The Conversation.