
Strengthening clarity and accountability: the first update to interim guidance on generative AI

DTA

The Artificial Intelligence (AI) in Government Taskforce has made the first update to the Interim guidance on agency use of generative AI since its release in July 2023. From the introduction of ‘golden rules’ to clearer principles, read on to find out what changed and why.

The interim guidance has become one of the most accessed resources hosted by the Digital Transformation Agency (DTA) since its launch in July this year. It outlines principles and considerations to support Australian Public Service (APS) agencies and staff using generative AI tools in their work.

Importantly, the guidance was designed to be iterative. This update, undertaken by the AI in Government Taskforce (the Taskforce), responds to user feedback and the evolving tech landscape.

‘With the impact of generative AI more tangible than ever, we’re always looking at how to make the guidance even more practical,’ said Lucy Poole, Acting Chief Executive Officer of the DTA and lead of the Taskforce.

‘These updates, as well as the upcoming Copilot for Microsoft 365 trial, are timely initiatives which will develop greater AI fluency and capability, consistent with the APS Reform agenda.’

While there’s plenty to explore in the updated guidance, here are the major takeaways.

Two golden rules

Generative AI can affect an enormous range and depth of workflows. That’s why APS staff are encouraged to weigh the risks, benefits and impacts on a case-by-case basis.

To make this a bit easier, two new ‘golden rules’ have been added.

  1. You should be able to explain, justify and take ownership of your advice and decisions.
  2. Assume any information you input into public generative AI tools could become public. Don’t input anything that could reveal classified, personal or otherwise sensitive information.

They summarise key aspects of the guidance and align with the APS Values and Code of Conduct, both of which apply regardless of what technology the APS might use.

Let’s consider them in action. If you use ChatGPT to generate insights from data, the second golden rule means you should only include information in your prompts that is appropriate for public consumption. The first golden rule means you should still be comfortable owning and explaining how you arrived at the insights you decide to use.

In summary: if you were to commit any part of the guidance to memory, the two golden rules are it.

Refreshed, human-oriented principles

Another response to the ‘range and depth’ challenge of generative AI is found in Australia’s AI Ethics Principles, which address artificial intelligence by its broadest definition.

The guidance’s own principles provide some useful interpretation for a generative AI context. This is made clearer by a restructure to reflect the format of Australia’s AI Ethics Principles, aiding easy and consistent application, and the introduction of a new principle – ‘Human, societal and environmental wellbeing’.

Under this principle, APS staff should consider whether using generative AI advances these objectives, and weigh other implications such as copyright liabilities and the forthcoming Framework for Indigenous Data and Governance.

Start with your agency policy

While this isn’t a change, it’s important to note.

APS staff should, first and foremost, refer to their own agency’s policies – where they exist – for guidance that is specific to their portfolio’s work and security needs.

Uplifting the APS

Government can and should be an exemplar user of emerging AI technologies, in a safe, ethical and responsible way. The landscape is evolving quickly, with more enterprise offerings competing with public platforms like ChatGPT.

Updates have also been made to the ‘introduction for agencies’, helping them consider how to equip their staff with the right tools in a safe, secure and responsible way.

Relevant, timely guidance on how to use generative AI capabilities appropriately gives public servants the means to build their familiarity with these tools, understand their limitations and identify how they can effectively support better policy and service delivery.

Read the full updated Interim guidance on agency use of generative AI.
