US AI Policy and Model Extraction Risks
On 24 April 2026, Michael Kratsios, the chief science and technology adviser to the U.S. president, issued a formal memorandum declaring that foreign entities, principally based in China, are conducting industrial-scale campaigns to extract capabilities from leading American-developed artificial intelligence systems. The technique described, referred to in the memorandum as “distillation,” involves systematically querying proprietary AI models and using the resulting input-output pairs to train replica systems, effectively transferring the intellectual and strategic value of a frontier model without accessing its underlying weights or training data. The U.S. administration announced it would coordinate directly with American AI companies to build technical defences against these unauthorised extraction efforts.
This development marks a significant shift in how governments and the private sector think about AI risk. The competitive framing that has dominated AI discourse since 2022, focused primarily on benchmark performance and the race to produce the most capable models, is giving way to a new priority: protecting the provenance and integrity of model infrastructure. The U.S. government’s position, as reported by the Associated Press on 24 April 2026, reflects an assessment that the performance gap between U.S. and Chinese frontier models has effectively closed, making the protection of existing intellectual property a more urgent concern than extending a capability lead that may no longer exist.
For professional services firms and organisations that have integrated frontier AI tools into their workflows, this shift has direct operational consequences. The supply chain considerations that once centred on software performance, pricing, and data privacy are now expanding to include defensive security, model provenance verification, and the risk that the AI infrastructure a business relies on may itself be a target of adversarial extraction. These concerns are moving rapidly into active enforcement and compliance frameworks.
Key details on adversarial model extraction and the U.S. government response
The technique at the centre of this enforcement action is technically known as adversarial model extraction or model stealing, and it is distinct from legitimate knowledge distillation, a well-established machine learning practice. In legitimate distillation, a smaller “student” model is trained to replicate the behaviour of a larger “teacher” model in a controlled and authorised setting, typically to reduce computational cost while preserving performance. Adversarial extraction involves an unauthorised party systematically querying a deployed proprietary model, collecting large volumes of input-output pairs, and using those pairs to train a functionally equivalent replica without access to the original model weights, architecture details, or training datasets. The Kratsios memorandum uses the term “distillation” to describe this adversarial use case, and practitioners working closely with AI vendors should understand that the regulatory language is applying the term more broadly than its strict technical definition.
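The query-and-replicate workflow described above can be illustrated with a deliberately tiny sketch. The "teacher" here is a stand-in black box (a hidden linear function, fitted by ordinary least squares); real frontier models and replica training are vastly more complex, and the function and names below are purely hypothetical, but the extraction loop — query the deployed model, collect input-output pairs, fit a replica — is the same in shape.

```python
import random

# Hypothetical "teacher": a proprietary model the attacker can only query.
# Its internal parameters (2.5 and 1.0) are never exposed directly.
def teacher(x: float) -> float:
    return 2.5 * x + 1.0

# Step 1: systematically query the deployed model, collecting (input, output) pairs.
random.seed(0)
queries = [random.uniform(-10.0, 10.0) for _ in range(1000)]
pairs = [(x, teacher(x)) for x in queries]

# Step 2: fit a "replica" to the collected pairs.
# Ordinary least squares suffices for this toy linear case.
n = len(pairs)
sx = sum(x for x, _ in pairs)
sy = sum(y for _, y in pairs)
sxx = sum(x * x for x, _ in pairs)
sxy = sum(x * y for x, y in pairs)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# The replica now reproduces the teacher's behaviour without any access
# to its weights, architecture, or training data.
print(round(slope, 2), round(intercept, 2))  # recovers ~2.5 and ~1.0
```

The point of the sketch is that nothing about the teacher's internals is needed: sufficient query volume alone transfers the behaviour, which is why the defensive measures discussed below focus on limiting and monitoring that volume.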
The memorandum specifically names China as the principal source of these extraction campaigns and frames the activity as a national security concern rather than a purely commercial intellectual property matter. This framing is significant. It signals that the U.S. government intends to treat unauthorised model extraction with the same seriousness as the theft of defence-relevant technology or sensitive research data. The administration’s stated intention to work with AI companies on technical defences suggests imminent changes to how frontier models are deployed commercially, including likely requirements around inference rate limiting, monitoring for systematic querying behaviour, and restrictions on the volume or nature of API access granted to certain classes of users or geographies.
From a technical architecture standpoint, the defensive measures being discussed in industry circles include output perturbation, where model responses are subtly modified to degrade the quality of extracted replicas without impairing legitimate use; inference rate limiting, which caps the query volumes required for effective extraction; and private, air-gapped deployment instances for sensitive organisational use cases. These approaches are analogous to the layered security architectures already applied to sensitive data environments, and their adoption is likely to accelerate as regulatory pressure mounts. For organisations currently using cloud-hosted frontier models via API, the compliance landscape around those deployments is likely to become more demanding within the next 12 to 24 months.
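Of the defences listed above, inference rate limiting is the most mechanically straightforward. A minimal sketch, assuming a simple per-client token-bucket policy (class name and parameters are illustrative, not any vendor's actual API; production gateways add distributed state, anomaly scoring, and per-geography rules):

```python
class TokenBucket:
    """Per-client token-bucket rate limiter: a burst allowance (capacity)
    refilled at a steady rate, denying requests once tokens run out."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = 0.0  # timestamp of the previous request

    def allow(self, now: float) -> bool:
        # Refill tokens for the time elapsed since the last request.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=5, refill_per_sec=1.0)
burst = [bucket.allow(now=0.0) for _ in range(10)]  # 10 requests at once
later = bucket.allow(now=2.0)                       # one request 2 s later
print(burst.count(True), later)  # 5 of the burst allowed; refill permits the later one
```

Timestamps are passed in explicitly here to keep the sketch deterministic; a deployment would use a monotonic clock. The security-relevant property is that sustained high-volume querying, the precondition for extraction, is throttled while ordinary interactive use passes unimpeded.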
The broader strategic context is also relevant. The Transparency Coalition’s AI legislative update from 24 April 2026 noted that regulatory attention in the United States is intensifying across multiple AI governance dimensions simultaneously, including transparency obligations, supply chain accountability, and now active defensive security requirements. The convergence of these regulatory streams suggests that organisations integrating AI into professional workflows are approaching a period of substantially increased compliance complexity, regardless of whether they are directly involved in AI development.

Australian context: AI sovereignty, supply chain risk, and professional services obligations
Australia does not currently have a direct equivalent to the Kratsios memorandum, but the policy direction it represents is closely aligned with emerging priorities in Australian AI governance. The Australian Government’s 2024 Interim Response to the Safe and Responsible AI consultation, along with the ongoing development of mandatory guardrails for AI in high-risk settings, reflects a similar trajectory toward formalised accountability for AI deployment across professional sectors.
References and related sources
- Primary source: radio.wpsu.org
- whatllm.org
- transparencycoalition.ai
- forbes.com
- global.toyota
How iEnvi can help
iEnvi integrates technology and data-driven approaches into environmental consulting. We monitor AI and technology developments that affect how environmental professionals deliver services to clients.
This is an iEnvi Machete news summary. Prepared by iEnvi to summarise the source article for contaminated land, groundwater, remediation, approvals and site risk professionals.
Published: 24 Apr 2026
Need advice on this topic? Speak to an iEnvi expert at hello@ienvi.com.au or 1300 043 684, or contact us online.