Federal Court Mandates Disclosure for AI-Generated Legal Submissions
On 16 April 2026, the Federal Court of Australia released a formal practice note establishing strict obligations for legal practitioners using generative artificial intelligence in the preparation of court documents. The practice note, issued under the authority of Chief Justice Debra Mortimer, was a direct response to a measurable increase in court filings containing fabricated case citations and hallucinated legal authorities produced by AI tools. Chief Justice Mortimer stated plainly that presenting AI-generated false or inaccurate information to the court is “unacceptable” and fundamentally inconsistent with a lawyer’s duty not to mislead the court. The development represents a significant change: generative AI has moved from an experimental productivity tool to a subject of formal judicial governance in Australian legal proceedings.
The practice note sets out three core obligations for practitioners. First, lawyers must disclose when generative AI has been used in the preparation of any document submitted to the court. Second, practitioners must verify that all cited legal authorities, cases, and propositions exist and are accurate before filing. Third, the court has flagged serious concerns about the input of confidential, suppressed, or private information into AI tools, citing the risk of data leakage through public-facing platforms. Together, these requirements make clear that reliance on AI tools will not excuse errors in a filed document: the practitioner remains personally liable for the accuracy and integrity of every document bearing their name.
For professional services firms operating across legal, consulting, and technical disciplines, this development extends well beyond the courtroom. Environmental consultants, expert witnesses, and technical advisors who prepare documents for judicial and regulatory bodies are directly affected by the principles the court has articulated. The governance expectations set by the Federal Court will, in practice, function as a reference standard against which professional conduct is measured across a range of submission contexts.
Mandatory Disclosure and Verification Requirements for AI Tools
The practice note issued on 16 April 2026 introduces mandatory disclosure requirements that apply at the commencement of documents where AI has been used for summarisation, analysis, or the generation of content (including images, audio, or video) that may bear on the admissibility of evidence. This scope is broader than many practitioners may have anticipated. It is not limited to AI-drafted legal arguments. Any AI-assisted component of a submission that could influence the court’s assessment of facts or evidence falls within the disclosure obligation. Firms that have been selectively disclosing AI use, or treating AI-generated summaries of background material as exempt, will need to reassess that approach.
On citation verification, the court’s expectation is explicit: automated citation checkers are not sufficient. The practice note requires manual validation of every legal proposition and case reference. This places a direct obligation on the supervising practitioner to have the subject matter expertise necessary to identify errors before a document is filed. In practical terms, a senior lawyer cannot delegate AI output verification entirely to a junior colleague or a software tool. The duty to verify sits with the person whose name appears on the submission. This is significant in large organisations where AI drafting workflows can involve multiple handoffs before final review.
The confidentiality concern raised in the practice note reflects a well-documented technical risk. Public-facing, consumer-grade AI platforms do not offer the same data governance protections as enterprise-grade, contractually bounded deployments. When practitioners input client information, suppressed details, or commercially sensitive data into these tools, there is a genuine risk that information is retained, logged, or used in model training processes depending on the platform’s terms of service. The Federal Court’s explicit warning on this point confirms that using non-enterprise AI tools for sensitive client matters is now considered a potential breach of professional duty, not merely a data hygiene concern.
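One practical control firms can apply is to redact identifiable client details before any text reaches an external AI platform. The sketch below is a minimal, hypothetical illustration of that pattern using only the Python standard library; the placeholder labels and regular expressions are illustrative assumptions, not a complete or matter-grade redaction ruleset, and real deployments would need far broader coverage (names, addresses, matter numbers, suppressed details).

```python
import re

# Illustrative patterns only: a real redaction pipeline would need a much
# broader, matter-specific ruleset reviewed by the supervising practitioner.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    # Australian mobile numbers in the common 04XX XXX XXX layout (assumed format).
    "PHONE": re.compile(r"\b(?:\+?61|0)4\d{2}\s?\d{3}\s?\d{3}\b"),
    # 11-digit ABN written with the usual 2-3-3-3 grouping (assumed format).
    "ABN": re.compile(r"\b\d{2}\s?\d{3}\s?\d{3}\s?\d{3}\b"),
}


def redact(text: str) -> str:
    """Replace matched sensitive substrings with labelled placeholders
    before the text leaves the firm's controlled environment."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text


print(redact("Contact J. Smith on 0412 345 678 or jsmith@client.example"))
```

A filter like this is a supplement to, not a substitute for, using enterprise-grade platforms with contractual data-handling protections; pattern-based redaction will miss context-dependent identifiers that only a human reviewer can catch.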
The broader context for this practice note is a pattern of AI-related filing problems that has been observed in courts internationally since late 2023, when United States federal judges began sanctioning lawyers for submitting briefs containing fabricated case citations generated by tools such as ChatGPT. The Australian Federal Court’s response in April 2026 reflects a considered regulatory position developed over approximately two and a half years of monitoring this issue. The practice note is not a reactive measure prompted by a single incident. It is a systematic governance response to a structural problem that the legal system has observed accumulating over time.

Compliance Implications for Environmental Consultants and Expert Witnesses
Australia’s legal profession operates under conduct rules administered at the state and territory level through the relevant Legal Services Commissions, with federal court practice governed by the Federal Court of Australia Act 1976 and the court’s own procedural rules. The practice note issued by Chief Justice Mortimer sits within this framework as a formal instrument that practitioners appearing in the Federal Court are required to follow. Non-compliance with a practice note is not merely an administrative irregularity. It can constitute conduct that falls short of the standard expected of an Australian legal practitioner, with potential disciplinary consequences under the relevant Legal Profession Uniform Law or its jurisdictional equivalents.
The implications extend to expert witnesses and technical consultants who prepare reports and evidence for Federal Court proceedings, including environmental consultants submitting expert technical evidence or Environmental Impact Statements in matters before the court. Where AI tools have been used in the preparation of such material, the disclosure and verification obligations established by this practice note apply directly. Environmental professionals operating in this space should review their current AI use policies and ensure that any AI-assisted content in court-bound documents is disclosed, verified, and produced using platforms with appropriate data governance controls.
References and related sources
- Primary source: www.theguardian.com
How iEnvi can help
iEnvi integrates technology and data-driven approaches into environmental consulting. We monitor AI and technology developments that affect how environmental professionals deliver services to clients.
This is an iEnvi Machete news summary. Prepared by iEnvi to summarise the source article for contaminated land, groundwater, remediation, approvals and site risk professionals.
Published: 16 Apr 2026
Need advice on this topic? Speak to an iEnvi expert at info@ienvi.com.au or 1300 043 684, or contact us online. iEnvironmental Australia provides practical, senior-led environmental consulting across contaminated land, remediation, ecology and environmental risk.