Australian businesses need more guidance in adopting safe and responsible artificial intelligence practices, a new report finds.
The Responsible AI Index 2024, commissioned by the National AI Centre, shows Australian businesses consistently overestimate their capability to employ responsible AI practices.
It found 78 per cent of Australian businesses believe they are implementing AI safely and responsibly, but this was the case for only 29 per cent.
The index surveyed 413 executive decision makers responsible for AI development in their organisations across financial services, government, health, education, telecommunications, retail, hospitality, utilities, and transport.
Businesses were assessed on 38 identified Responsible AI practices across five dimensions:
- accountability and oversight
- safety and resilience
- fairness
- transparency and explainability
- contestability
The Index found that, on average, Australian organisations are adopting only 12 of the 38 responsible AI practices.
In a bid to ensure organisations are designing and using AI responsibly, the Albanese Government is currently undertaking several initiatives to support best practice in AI.
Today the government has released a Voluntary AI Safety Standard and a Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings.
The Responsible AI Index, produced by Fifth Quadrant, is available at .
Quotes attributable to the Minister for Industry and Science, Ed Husic:
“We know AI can be hugely helpful for Australian business, but it needs to be used safely and responsibly.
“The Albanese government has worked with business to develop standards that help identify and manage risks.
“This is a practical process that businesses can put into use immediately, so protections will be in place.
“Artificial intelligence is expected to create up to 200,000 AI-related jobs in Australia by 2030 and contribute $170 billion to $600 billion to GDP, so it’s crucial that Australian businesses are equipped to properly develop and use the technology.”