Today, millions of people are digitally excluded. This means they miss out on the social, educational and economic benefits of being online.
In the face of this ongoing “digital divide”, countries are now talking about a future of inclusive artificial intelligence (AI).
However, if we don’t learn from current problems with digital exclusion, it will likely spill over into people’s future experiences with AI. That’s the conclusion from our new research, published in the journal AI and Ethics.
What is the digital divide?
The digital divide is a gap between people who can fully participate in the digital world and those who cannot. People on the wrong side of it face difficulties when it comes to accessing, affording or using digital services. These disadvantages significantly reduce their quality of life.
Years of research have provided us with a rich understanding of who is most at risk. In Australia, older people, those living in remote areas, people on lower incomes and First Nations peoples are most likely to find themselves digitally excluded.
Zooming out, global figures show that one-third of the world’s population – representing the poorest countries – remains offline. Globally, the gender digital divide also still exists: women, particularly in low- and middle-income countries, face substantially more barriers to digital connectivity.
During the COVID pandemic, the impacts of digital inequity became much more obvious. As large swathes of the world’s population had to “shelter in place” – unable to go outside, visit shops, or seek face-to-face contact – anyone without digital access was severely at risk.
Consequences ranged from social isolation to reduced employment opportunities, as well as a lack of access to vital health information. The United Nations warned that “the digital divide is now a matter of life and death”.
Not just a question of access
As with most forms of exclusion, the digital divide functions in multiple ways. It was originally defined as a gap between those who have access to computers and the internet and those who do not. But research now shows it’s about much more than access alone.
Having little or no access leads to reduced familiarity with digital technology, which then erodes confidence and skills, and ultimately sets in motion a self-reinforcing cycle of exclusion.
As AI tools increasingly reshape our workplaces, classrooms and everyday lives, there is a risk AI could deepen, rather than narrow, the digital divide.
The role of digital confidence
To assess the impact of digital exclusion on people’s experiences with AI, in late 2023 we surveyed a representative sample of hundreds of Australian adults. We began by asking them to rate their confidence with digital technology.
We found digital confidence was lower for women, older people, those on lower incomes, and those with less digital access.
We then asked these same people to comment on their hopes, fears and expectations of AI. Across the board, the data showed that people’s perceptions, attitudes and experiences with AI were linked to how they felt about digital technology in general.
In other words, the more digitally confident people felt, the more positive they were about AI.
These findings are important to consider for several reasons if we want to build truly inclusive AI. First, they confirm that digital confidence is not a privilege shared by all.
Second, they show us digital inclusion is about more than just access, or even someone’s digital skills. How confident a person feels in their ability to interact with technology is important too.
Third, they show that if we don’t contend with existing forms of digital exclusion, they are likely to spill over into perceptions, attitudes and experiences with AI.
Governments and communities are already working to reduce the digital divide. So we must make sure the rise of AI doesn’t slow these efforts, or worse still, exacerbate the divide.
What should we hope for AI?
While there are valid concerns about its risks, when deployed responsibly AI can have significant positive impacts on society. Some of these can directly target issues of inclusivity.
For example, computer vision can track the action during a match, making it audible for blind or low-vision spectators.
AI has been used to analyse employment data to help boost employment outcomes in under-represented populations such as First Nations peoples. And, while they’re still in the early stages of development, AI-powered health tools could increase the accessibility and affordability of medical services.
But this responsible AI future can only be delivered if we also address what keeps us digitally divided. To develop and use truly inclusive AI tools, we first have to ensure that feelings of digital exclusion don’t spill over.
This means not only tackling pragmatic issues of access and infrastructure, but also the knock-on effects on people’s levels of engagement, aptitude and confidence with technology.