The Australian Government’s policy for how it uses AI is now in effect and more work is underway. Read on to find out what steps we’re taking next.
As of 1 September 2024, Australian Public Service (APS) agencies have begun implementing the Australian Government's policy for the responsible use of AI in government.
To help agencies comply with the policy, the Digital Transformation Agency (DTA) has published supporting standards, including one for designating accountable officials and one for publishing AI transparency statements.
Moving forward, the DTA will continue to support agencies through training, AI assurance, capability development and technical standards.
Read on to learn more about how the DTA is working with APS agencies to ensure government serves as an exemplar for the responsible use of AI.
Training APS staff on the fundamentals of AI
It is important for APS agencies to provide staff with the knowledge and support needed to determine if general-use AI tools, including generative AI, are right for their work.
The policy strongly recommends that agencies implement AI fundamentals training for all their staff and, by early October, the DTA will publish an AI fundamentals training module.
This training offers a clear, consistent understanding of how AI works and what to consider when making responsible choices. It will be available for agencies to integrate into their own learning platforms.
We will also refresh and rename the existing interim guidance as the 'Guidance for using generative AI', making it clearer, easier to access and a great companion to the fundamentals training.
Piloting an AI assurance framework
In June this year, the Australian, state and territory governments agreed to the National framework for the assurance of artificial intelligence in government.
The framework is informed by Australia's AI Ethics Principles and aligns each jurisdiction's own assurance activities, so people can expect consistent assessment and management of the impacts of AI across governments.
Through the rest of 2024, the DTA will pilot an AI assurance framework for the Australian Government to test how we will identify, evaluate and manage use cases with elevated risks.
Learnings from the Copilot trial
General-use AI capabilities, such as generative AI, may affect many agencies, their staff and their ways of working. They create new opportunities as well as new risks.
The government has approached this change proactively, including through its cross-APS trial of Copilot for Microsoft 365. In the coming months, the DTA will publicly share learnings from the trial.
These findings will be relevant to both agencies considering using generative AI capabilities and vendors looking to offer these capabilities to government.
Sign up to our mailing list for updates.
Technical standards for AI
To encourage responsible innovation, the DTA is developing AI technical standards. This work brings together the APS' leading AI and machine learning expertise through a cross-government working group and will continue through to mid-2025.
The resulting standards will be published under an open licence. This continues our commitment to transparency and encourages both consistency and interoperability between governments, academia and industry as we all contribute to safe and responsible AI in Australia.
Evolving with technology and the Australian community
AI technologies are evolving rapidly, so our policies and standards will evolve with these advancements and community expectations.
We will also continue to share the work government is doing, how it is doing it and why. We hope this openness will build a culture of transparency around our use of AI and strengthen community trust and confidence.
The Digital Transformation Agency is the Australian Government's adviser for the development, delivery and monitoring of whole-of-government strategies, policies and standards for digital and ICT investments and procurement.