AI Distillation Attacks: Executive and Congressional Action Can Go Further
This policy memo is a follow-up to AI Distillation Attacks: The Case for Targeted Government Intervention (March 2026).
The White House and Congress have begun acting on AI distillation attacks, but additional steps are needed to match the scale of the threat. First, the April 23 National Security and Technology Memorandum (NSTM) on “Adversarial Distillation of American AI Models” from the White House Office of Science and Technology Policy (OSTP) should be supplemented with further policy addressing the link between distillation attacks and access to advanced semiconductors, as well as specific accountability mechanisms. This is especially timely given that export controls are likely to be on the agenda during President Trump’s visit to China on May 13-15. Second, the proposed Deterring American AI Model Theft Act of 2026 would benefit from targeted amendments on industry cooperation, cross-government information sharing, federal agency access to models from entities of concern, and the relationship between distillation attacks and trade secret theft.
Why AI Distillation Attacks Matter
AI distillation attacks sit at the intersection of economic and national security, representing a direct challenge to the effectiveness of America's export control regime. There is clear evidence that Chinese companies are attempting to distill leading frontier AI models, including models that could have military applications, with illicitly obtained capabilities then cascading through the Chinese and wider open-source AI ecosystem. By circumventing semiconductor export controls, these attacks undermine the billions of dollars in R&D that underpin American AI leadership. Industry defenses, while necessary, cannot fully resolve the coordination challenges exploited by attackers or impose the costs required to alter adversary incentives.
The development of more powerful AI models with national security implications—including Anthropic’s Mythos Preview (and the U.S. Government’s response to it) and OpenAI’s GPT-5.5 model—highlights the urgent need to address AI distillation attacks. As substantiated by the UK AI Security Institute, both Mythos Preview and GPT-5.5 demonstrate rapidly advancing autonomous cyber capabilities that could be used for malicious purposes. In addition, the UK Government now assesses that frontier model capabilities are doubling every four months (versus eight previously) while leading Chinese open-weight models lag the U.S. frontier by about eight months, likely assisted by distillation.
The NSTM
The April 23 NSTM issued by OSTP confirms that the U.S. Government has information about foreign entities, principally based in China, conducting AI distillation attacks in a coordinated and systematic manner. It commits the Administration to information sharing with industry on threat actors and their tactics, developing best practices to detect, address, and defend against distillation attacks, and exploring measures to hold attackers accountable. The NSTM also recognizes the legitimate role that distillation can play in the AI ecosystem, including in the development of open-weight models—consistent with the practices of companies that distribute open models alongside their revenue-generating offerings—and with AI companies that offer compliant distillation services for customer fine-tuning.
While the NSTM is a meaningful first step, it does not address three issues central to an adequate response. First, the NSTM does not address the relationship between distillation attacks and semiconductor export controls. The acquisition of advanced semiconductors and associated manufacturing equipment containing U.S.-origin technology allows AI distillation attackers to scale their activity and leverage the outputs they illicitly obtain from frontier U.S. models in the development and deployment of competing ones. DeepSeek’s latest V4 model—released in preview on April 24—was reportedly trained in China on NVIDIA’s most advanced Blackwell chip, and also likely relied on distillation. Second, the NSTM does not consider the risk of Chinese-linked distillation attackers remotely accessing advanced U.S.-origin semiconductors located outside of China (e.g., by renting cloud-based compute). Such access can support AI distillation attacks and frees up attackers’ limited domestic compute for other workloads. Third, the NSTM provides no detail on the specific mechanisms the Administration intends to use to hold foreign entities involved in AI distillation attacks accountable, or on plans for enforcement.
To adequately address these gaps, the Administration should develop further policy to account for the risks associated with distillation attacks. This should include:
Semiconductor export controls. The Department of Commerce’s position on exports of advanced AI chips—and U.S. and allied semiconductor manufacturing equipment—to countries of concern should consider the risk that they could be used to support distillation attacks.
Remote access to chips. The Department of Commerce should consider the risk that foreign entities conducting distillation attacks could benefit from remote access to clusters of U.S.-origin semiconductors not physically located in countries of concern.
Specificity of accountability mechanisms. The White House should determine and publicly state the specific authorities (either existing or required from Congress) it intends for executive agencies to use to hold AI distillation attackers and their enablers to account, together with plans for enforcement. Regarding accountability mechanisms, IAPS’ March policy memo recommends considering both additions to the Bureau of Industry and Security (BIS) Entity List and designation under the Protecting American Intellectual Property Act of 2022.
The Deterring American AI Model Theft Act of 2026
The Deterring American AI Model Theft Act of 2026, introduced by Representative Huizenga (R-MI-4) on April 15, was reported out of the House Foreign Affairs Committee 43-0 following an April 22 markup. The bill proposes a statutory framework for identifying, publicizing, and punishing foreign entities involved in AI distillation attacks (termed “Model Extraction Attacks”). It directs the Secretary of Commerce¹, in coordination with the Operating Committee for Export Policy, to assess which entities of concern have conducted model extraction attacks or are fraudulent account network providers helping users in countries of concern bypass geographic access restrictions. The End-User Review Committee would then vote on adding identified entities and their affiliates to the BIS Entity List, and the President would have discretionary authority to impose blocking sanctions under the International Emergency Economic Powers Act. The bill also establishes a public attackers list, a voluntary information-sharing mechanism between closed-source AI model owners and the Department of Commerce, and best-practice guidance on detecting, preventing, and responding to model extraction attacks.
The bill provides a strong substantive statutory framework but has four meaningful gaps that could undermine its intent and effectiveness. First, the bill does not address potential antitrust and data privacy issues that may prevent industry from cooperating to detect, prevent, and respond to model extraction attacks. Second, it does not expressly provide that information related to model extraction attacks and fraudulent account network providers can be disseminated across government, subject to appropriate information handling requirements. Such cross-government information sharing is needed to inform sanctions decisions under s.5 and to fulfill other connected purposes of the bill. Third, the bill's enforcement architecture does not explicitly restrict federal agencies from accessing models developed by entities of concern, despite their reliance on model extraction attacks. Fourth, it does not establish that model extraction attacks constitute trade secret theft, limiting potential tools for enforcement and accountability.
Targeted amendments should be made to the bill during its passage through Congress to address these gaps and make other minor technical fixes:
Industry Cooperation. Include an investigation of legal barriers to information sharing related to AI model misuse in the assessments required by s.4 of the bill and, if warranted, recommend policy solutions to address them. Any such solutions will need to be carefully calibrated to address the specific issue of model extraction attacks while preserving competitive dynamics within the industry and appropriate data privacy.
Cross-Government Information Sharing. Include a new provision in s.4(f) to enable the confidential sharing of information related to model extraction attacks and fraudulent account network providers within the government.
Federal Agency Access to Models from Entities of Concern. Include a new subsection under s.5 stating that, subject to a national security and research exemption, any AI model developed by an entity of concern may not be accessed—directly or indirectly—by any executive agency. Include relevant definitions of ‘AI model’ and ‘executive agency’ in s.3.
Trade Secret Theft. Include a new subsection under s.5 providing that “entities of concern identified pursuant to subsection (b)(1) or subsection (e) are deemed to have engaged in activity connected to the significant theft of trade secrets,” with a new subsection under s.3 stating that “the term ‘trade secret’ has the meaning given that term in section 1839 of title 18, United States Code.” This would help make sanctions under the Protecting American Intellectual Property Act of 2022 available in response to the NSTM's commitment to “explore a range of measures to hold foreign actors accountable for industrial-scale distillation campaigns.”
Assessment Cadence and Scope Inconsistencies. The assessment cadence (“annually for 3 years”) may be too infrequent and sunset too soon, given the pace of model development. Amend s.4(d) to strike “annually for 3 years” and insert “at least annually for 5 years.” The assessment scope described in s.4(b)(5) is also inconsistent with s.4(b) and the public guidance required at s.4(h). As currently drafted, this could leave the assessment required by s.4 without a statutory requirement to examine the prevention and response approaches to distillation attacks on which the Department of Commerce would be required to publish best practices. Strike s.4(b)(5) and insert:
“An examination of the strengths and weaknesses of various approaches that can be used to —
(A) detect model extraction attacks;
(B) determine whether a model extraction attack has occurred or is occurring;
(C) prevent model extraction attacks from occurring; and
(D) respond to model extraction attacks to reduce the incentives to engage in such activity or act as a fraudulent account network provider.”²
Endnotes
1. The bill’s original text directs the Secretary of State to complete the assessment. Rep. Huizenga’s April 22 technical amendment was adopted to replace all references to the Secretary of State and the Department of State with the Secretary of Commerce and the Department of Commerce respectively, save for a reference on page 17, line 10, where the reference to the Secretary of State was struck and not replaced.
2. In addition, to address minor language inconsistencies across related provisions, the following amendments should be made:
1) Add “respond” to s.4(b)(7) and s.4(b)(8) to align with s.4(h)'s guidance requirement.
2) Amend s.5(a) to use the “entities of concern” language from s.5(b).