The DTA is exploring artificial intelligence (AI) assurance mechanisms for Australian Government agencies consistent with the ³Ô¹ÏÍøÕ¾ framework for the assurance of AI in government. Our approach to AI assurance prioritises human oversight and the rights, wellbeing, and interests of people and communities.
In September 2024, the DTA began a pilot of a new draft assurance framework and accompanying guidance.
This draft assurance framework and guidance detail key questions government agencies should answer to assess the impact of their AI use cases against Australia's AI Ethics Principles. The framework is designed to help agencies identify and mitigate AI risks at each stage of the AI lifecycle, and to document the steps they are taking to ensure their AI use is responsible.
Our draft assurance framework is intended to complement and strengthen – not replace or duplicate – existing frameworks, legislation, and practices addressing the government’s use of AI. Agencies using this framework must still ensure their AI use case complies with all relevant legislative and policy requirements that would apply to any government activity.
The pilot comprises departments and agencies of varying sizes and remits that have volunteered to trial the draft framework. Participants are also at different stages of their AI journeys.
‘Hearing from these diverse agency perspectives is invaluable for us,’ says Lucy Poole, General Manager of Strategy, Planning and Performance. ‘Their insights will help refine the assurance framework and guidance to ensure they work effectively in different contexts.’
Following the pilot
From November 2024, we will hold participant feedback sessions and interviews, and analyse survey responses, to inform updates to the framework and guidance. A complete list of participants will be published on digital.gov.au soon.
Further opportunities for all interested agencies to provide feedback will take place near the end of this year and into early 2025.
Evidence gathered through the pilot will inform the DTA's recommendations to government on future government AI assurance settings, as part of next steps for the ³Ô¹ÏÍøÕ¾ framework for the assurance of AI in government.
‘Our guidance is iterative. It is meant to change and adapt based on the shifting AI landscape within the APS,’ points out Ms Poole. ‘The framework and guidance are subject to amendments based on feedback from pilot participants and other stakeholders.’
‘This draft does not represent a final Australian Government position on AI assurance.’
AI assessments
Under the draft assurance framework, agencies complete an initial threshold assessment covering basic information about the use case, with emphasis on the challenges being addressed and the benefits an AI solution is expected to deliver.
Agencies also need to note potential non-AI alternatives that could deliver similar solutions and benefits.
‘We want agencies to carefully consider viable alternatives,’ explains Ms Poole. ‘For instance, non-AI services could be more cost-effective, secure, or dependable.’
‘Evaluating these options will help agencies understand the advantages and limitations of implementing AI. This enables them to make a better-informed decision on whether to move forward with their planned use case.’
If the assessment contact officer and executive sponsor are satisfied that all risks identified in the initial assessment are low, a full assessment is not needed. If one or more risks rate as medium or above, the agency proceeds to a full assessment.
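The triage rule above reduces to a simple check: a full assessment is triggered whenever any risk rates medium or above. The sketch below illustrates that logic; the risk levels and function names are hypothetical and not part of the DTA framework.

```python
from enum import IntEnum

class Risk(IntEnum):
    """Hypothetical risk ratings for a threshold assessment."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def requires_full_assessment(risk_ratings: list[Risk]) -> bool:
    """Return True if any identified risk rates medium or above."""
    return any(rating >= Risk.MEDIUM for rating in risk_ratings)

# All risks low: no full assessment needed.
print(requires_full_assessment([Risk.LOW, Risk.LOW]))     # False
# One medium risk is enough to require a full assessment.
print(requires_full_assessment([Risk.LOW, Risk.MEDIUM]))  # True
```

In practice the judgement calls sit with the assessment contact officer and executive sponsor; the code only captures the escalation rule itself.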
The full assessment asks agencies to document how the use case measures up against Australia's AI Ethics Principles. These include, but are not limited to:
- Fairness. Agencies are to reflect on potential biases in training data that may be incomplete, unrepresentative, or reflective of societal prejudices. AI models may reproduce these biases, generating misleading or unfair outputs, insights, or recommendations that can disproportionately impact some groups.
- Reliability and safety. Our draft framework and guidance suggest how agencies can ensure the reliable and safe delivery and use of AI systems. We particularly focus on data suitability, Indigenous data governance, AI model procurement, testing, monitoring, and preparedness to intervene or disengage.
- Privacy protection and security. Privacy protection, data minimisation, and security are vital to the development and rollout of AI services under Australian regulations. Solutions must comply with the Australian Privacy Principles, use privacy-enhancing technologies, and undergo mandatory privacy impact assessments for high-risk projects.
- Transparency and explainability. Our resources highlight the need to consult diverse stakeholders, maintain public visibility, document AI systems, and disclose AI interactions and outputs. They also provide guidance on offering appropriate explanations and maintaining reliable records.
AI in government
This pilot builds on our wider policy for the responsible use of AI in government. The policy aims to establish the Australian Government as a leader in the safe, ethical, and responsible use of AI. It seeks to ensure public benefit while fostering trust and addressing potential risks.
‘By recognising and addressing public concern regarding AI, the policy aims to strengthen trust through transparency, accountability, and responsible implementation,’ explains Ms Poole. ‘This is achieved through mandatory transparency statements and appointing accountable officials for AI.’
‘Our goal is to provide a unified approach for government agencies to engage with AI confidently. It establishes baseline requirements for governance, assurance, and transparency, removing barriers to adoption and encouraging safe use for public benefit.’
The policy acknowledges the rapidly changing nature of AI technology. It is designed to be flexible and adaptable, undergoing ongoing review and evaluation to remain fit for purpose as the technology and regulatory landscape evolve.