APS shifts AI strategy from experimentation to governance and accountability

Overview

On 22 April 2026, Lucy Poole, Deputy CEO of the Strategy, Planning and Performance Division at the Australian Government's Digital Transformation Agency (DTA), delivered a keynote address at the 12th Annual Data and Digital Governance Summit. Her address marked a formal and public articulation of where the Australian Public Service stands on artificial intelligence: past the point of experimentation, and now squarely confronting the harder questions of governance, accountability, and institutional alignment. For professionals and businesses operating within or alongside government, this represents a significant shift in how AI deployment will be assessed, procured, and overseen.

Poole’s central argument was that the APS must move beyond treating AI as a tool for accelerating existing workflows. Instead, the focus must shift toward what she described as “imagination, alignment, and how people experience government in practice.” This framing is significant because it signals that the primary measure of AI success in government will no longer be speed or efficiency alone, but whether AI-supported services genuinely improve citizen outcomes and whether the institutional structures surrounding those services can carry the weight of accountability. For consultants, technology vendors, and businesses working on government contracts, this reframing has direct consequences for how they design, document, and deliver AI-integrated solutions.

The timing of this shift is notable. Across federal and state agencies, there has been a rapid uptake of AI tools for document processing, data analysis, policy drafting, and service triage. That experimentation phase has generated valuable learning, but it has also exposed gaps in governance frameworks, raised questions about liability when automated systems produce errors, and created inconsistency across agencies in how AI is deployed and overseen. Poole’s address effectively acknowledges those gaps and signals that the DTA is moving toward a more structured and demanding phase of AI integration across the APS.

Key details

The most technically consequential element of Poole’s address was her focus on agentic AI. Unlike conventional AI tools that retrieve information or generate text in response to a prompt, agentic AI systems are designed to plan and execute sequences of actions, make intermediate decisions, and complete multi-step tasks with limited human intervention at each stage. Poole noted that government agencies must now confront fundamental questions around delegation, accountability, intervention, and public trust as these systems move closer to public-facing services. This is not a hypothetical concern. Several APS agencies are already trialling or deploying agentic systems for tasks ranging from correspondence management to case processing, and the governance frameworks to support those deployments have not kept pace with the technical capability.
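To make the distinction concrete, the pattern described above can be sketched as a simple agent loop with a human-in-the-loop checkpoint. This is purely illustrative: all names here (Step, run_agent, the approve callback) are hypothetical and do not reflect any actual APS or DTA system; the sketch only shows where the "delegation, accountability, intervention" questions arise in an agentic workflow.

```python
# Illustrative sketch: an agent executing a multi-step plan, pausing for
# human approval on consequential steps so that accountability remains
# anchored with a human decision-maker. Hypothetical names throughout.
from dataclasses import dataclass, field


@dataclass
class Step:
    description: str
    consequential: bool  # does this step execute a decision, not just inform?


@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, actor: str, step: Step) -> None:
        self.entries.append((actor, step.description))


def run_agent(plan, approve, log):
    """Execute each step of a plan; consequential steps require explicit
    human sign-off before the agent may act (the intervention point)."""
    completed = []
    for step in plan:
        if step.consequential:
            if not approve(step):            # human can halt the agent here
                log.record("human:rejected", step)
                continue
            log.record("human:approved", step)
        log.record("agent:executed", step)
        completed.append(step.description)
    return completed


plan = [
    Step("draft reply to routine correspondence", consequential=False),
    Step("close the case and notify the applicant", consequential=True),
]
log = AuditLog()
done = run_agent(plan, approve=lambda s: True, log=log)
```

The design point the sketch illustrates is that the audit log and the approval gate are part of the system's structure, not an afterthought: every action is attributable either to the agent alone (informational steps) or to a recorded human decision (consequential steps).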

A central theme in Poole’s address was the uneven pace of AI adoption across the APS. Agencies face significantly different operational contexts, including legacy systems of varying age and complexity, service obligations that differ in sensitivity and risk profile, and workforces at different stages of digital capability. This means that a single, uniform AI governance framework cannot be applied across the APS without adjustment. Poole acknowledged this variation directly, and the practical consequence is that businesses and technology vendors should expect agency-by-agency differences in AI maturity, governance appetite, and specific requirements for human-in-the-loop safeguards. For those seeking to supply AI-enabled services to government, this creates a more complex compliance landscape than a blanket national policy would produce.

Poole also issued a direct caution against using AI as a substitute for sound public service design. This warning has practical implications for procurement. It suggests that proposals which rely on AI to paper over process failures or institutional coordination problems are unlikely to receive support under the emerging governance posture. Instead, technology submissions will be expected to demonstrate that AI components serve well-designed, human-centric service models rather than replace the design work itself. This is consistent with broader international trends in responsible AI adoption within public institutions, but Poole’s articulation of it at a governance summit gives it specific weight in the Australian context.

On the question of accountability, Poole’s address raised the issue of legal and ethical responsibility for AI outputs as AI agents move toward handling more complex and consequential tasks. When an AI system shifts from providing information to executing decisions, the chain of accountability becomes more difficult to trace. The DTA’s emerging position is that accountability must remain clearly anchored within human decision-makers and institutional structures, regardless of how much task execution is delegated to automated systems. This will almost certainly inform future updates to APS procurement frameworks, contract conditions, and agency operational guidelines.


Australian context: AI governance and APS procurement implications for businesses and consultants

Australia does not yet have a single binding legislative framework governing AI deployment across the public sector, though the regulatory landscape is evolving quickly. The Australian Government’s Voluntary AI Safety Standard, published in September 2024, established ten guardrails for responsible AI use by Australian organisations. While voluntary for the private sector, these guardrails are increasingly influential in shaping APS internal policy and procurement expectations. Poole’s address at the Data and Digital Governance Summit signals that the DTA is preparing to operationalise principles consistent with those guardrails more formally across agencies.

How iEnvi can help

iEnvi integrates technology and data-driven approaches into environmental consulting. We monitor AI and technology developments that affect how environmental professionals deliver services to clients.


This is an iEnvi Machete news summary. Prepared by iEnvi to summarise the source article for contaminated land, groundwater, remediation, approvals and site risk professionals.

Published: 23 Apr 2026

Need advice on this topic? iEnvi provides practical, senior-led environmental consulting across contaminated land, remediation, ecology and environmental risk. Speak to an iEnvi expert at hello@ienvi.com.au or 1300 043 684, or contact us online.