IAPS Researchers React: The US AI Action Plan

The Trump Administration unveiled its comprehensive AI Action Plan on Wednesday. Experts at the Institute for AI Policy and Strategy reviewed the plan with an eye toward its national security implications. As AI continues to accelerate towards very powerful artificial general intelligence, our researchers discuss promising proposals for addressing critical AGI risks, offer key considerations for government implementation, and explore the plan's gaps and potential solutions.


A Very Promising Start

There is a lot to like about the AI Action Plan. Here’s what our staff are most excited about:

Advancing American Prosperity  

  • I’m happy to see the Administration recognizing the transformative potential of AI and calling it what it is – “an industrial revolution, an information revolution, and a renaissance—all at once”. I’m also excited to see the Administration focused clearly on delivering the benefits of this triple revolution for the American people.

  • This is a plan driven by excitement for the future, and it’s an excitement that I share.  It also acknowledges that positive outcomes must be built on a strong practical foundation that grapples with the energy and infrastructure needs, domestic semiconductor manufacturing, workforce implications, and security throughout. As AI gets integrated across the economy, getting these factors right will be key. 

  • AI has the potential to address real-world problems today. In medicine alone, it could help end the many insidious diseases that plague us. But these benefits must be balanced against the risks of deploying these systems. In this plan, the administration demonstrates awareness of this critical balancing act between risk and reward and the uncertainties surrounding it.

Realizing that the Benefits of AI Require Security 

  • It is promising that the AI Action Plan recognizes American innovation in AI cannot happen without a bedrock of security, like standards for high-security data centers, critical infrastructure cybersecurity, and a trusted AI supply chain. The US faces a formidable cyber threat from China, and it would be a shame to invest enormously in AI only for that investment to be stolen–or worse, sabotaged–by a malicious actor. 

    • Shaun Ee, Policy and Strategy Manager, Frontier Security, IAPS

  • Promising to see the administration promoting secure AI development for critical infrastructure through secure-by-design systems and better information sharing (AI-ISAC). Getting security right from the start is crucial—in traditional software, we've spent years prioritizing features over security, creating today's widespread vulnerabilities. This early focus can help us avoid repeating those mistakes and build trustworthy American AI systems.

  • I’m glad to see the Action Plan’s establishment of an AI Information Sharing and Analysis Center (AI-ISAC) for critical infrastructure sectors, as well as tasking DHS to issue guidance to private sector entities on handling AI-specific vulnerabilities and threats. In our previous work on information-sharing, we flagged the development of a “nervous system” in U.S. government for AI threats as a critical tool for guiding appropriate policy response and crisis response. I’ll be excited to see this incident visibility and response infrastructure implemented.

Securing Critical Defense Systems 

  • A particular strength of the AI Action Plan is its emphasis on ensuring that the Department of Defense’s AI systems are secure and reliable. Given the uniquely high-stakes use cases of these systems, it is vital that they are not vulnerable to unexpected failures or to attacks from rival nation-states. The Plan’s recommendations on high-security data centers for military usage, as well as on advancing DOD AI assurance through a novel proving ground and well-developed AI frameworks, will keep America at the forefront of responsible military AI adoption.

Monitoring Foreign Capabilities

  • The plan recommends that various agencies work to monitor “foreign frontier AI projects that may have national security implications.” This is critical – many policymakers seemed to be completely caught off guard with DeepSeek even though some open sources were commenting on the developer’s significance months before it received widespread attention in the US. We need to avoid similar surprises. With more warning, US leaders will be better able to plan any responses that are needed to foreign AI development.

Leveraging Location Verification to Strengthen Export Controls   

  • The Action Plan includes valuable measures to get BIS better information about where and how smuggling is happening, through collaboration with the intelligence community, and location verification. Location verification for AI chips in particular is an idea that IAPS has championed – my colleague Asher and I wrote the first full report on the idea last year. Understandably, the Action Plan only calls for “exploration” of the idea, but exploration could already mean e.g. working with Nvidia to prove out the idea on their chips in high-risk countries, and let Jensen Huang prove that chip smuggling is not happening. The bipartisan bicameral Chip Security Act shows potential Congressional interest in the idea of location verification as well. However, Congress will also need to approve the President’s request for increased BIS funding, so that BIS will have the enforcement resources to act on that information.


Further Considerations for Implementation 

Of course, the Action Plan is just that – a plan. It’s great to see the administration has set ambitious goals that address everything from AI security research to export control enforcement but success will hinge significantly on effective implementation across these areas.

In the coming months and years, executive agencies will need decisions on how to implement, and will need to do so in an environment of increasingly constrained resources. For example, for AI R&D, what area should be prioritized or what should the technical standards include for high security data centers? Details will determine outcomes, and our experts offer targeted insights on how the administration should execute critical elements of the plan. 

Here’s what our staff are thinking will matter most:

Identifying AI R&D Priorities 

  • The plan aims to accelerate AI interpretability, control, and robustness science through a technology development program at DARPA and inclusion in the forthcoming National AI R&D Strategic Plan (our response here). This is great to see–in the implementation stage, I would also want to see these initiatives take on additional high-priority AI security and reliability research angles (such as work on multi-agent interactions), especially ones that are neglected by industry

Next Steps for Securing Defense and Intelligence Data Centers

  • It’s excellent to see the Action Plan recommend the buildout of high-security data centers for military and intelligence usage. There are three key next steps. First, if the Plan is to achieve its aim of securing data centers against “the most determined and capable nation-state actors”, there is an urgent need for DARPA to fund novel AI security R&D, especially to advance AI-related hardware security and supply chain security. Second, actually building these data centers will necessarily involve private firms, including many nontraditional contractors: the DOD should ensure that all personnel are vetted and monitored. Third, the administration should clarify that contractors deploying AI systems for military or IC usage should do so on these secure data centers: this requirement should extend to training and fine-tuning, which also present vectors for nation-state attacks on these critical U.S. defense assets.

Addressing Federal Talent Gaps

  • Conducting an inventory of DOD’s workforce and establishing AI talent development programs accordingly is a laudable and necessary step toward preparedness. I’d like to see this exercise repeated across government, particularly in national security agencies such as DHS, DOE, and members of the Intelligence Community.

Securing Industry from Nation-States

  • The plan calls for protecting commercial and government innovation from malicious cyber threats and other security risks. This is critical—unreleased internal models and AI research could benefit foreign adversaries if compromised. It will be good to see robust measures here, such as a comprehensive federal threat-sharing program, specialized red-teaming services, and streamlined processes to rapidly declassify relevant intelligence. Clear roles and responsibilities for agencies like CISA and NSA must also be established. Legislation like Advanced AI Security Readiness Act, which directs the NSA to develop a strategy to help secure industry, could help.

Testing and Evaluation infrastructure for DOD

  • The proposal for a DOD AI & Autonomous Systems Virtual Proving Ground and an update to DOD guidance, roadmaps, and toolkits related to AI is a strong step toward effectively deploying advanced AI in national security contexts. The recent announcements of contracts with major AI companies such as OpenAI and Anthropic to acquire frontier AI systems show that the DOD is interested in using such systems for potentially high-stakes use cases. Given that frontier AI systems introduce novel vulnerabilities and failure modes, this “proving ground" should include testbeds and testing and evaluation guidance specifically for frontier AI systems, which currently don’t exist. Ideally, this should involve developing adversarial testing environments that mirror operational reality. There should also be formal channels for third-party experts to supplement vendor-led evaluations and sharpen DOD processes

Improving Federal AI Incident Response Coordination During Significant Events 

  • The incident response actions included in the plan are a strong start. The plan directs various agencies to update incident response doctrine and best practices for both public and private sectors. It also includes actions to update CISA's Incident & Vulnerability Response Playbooks and improve AI vulnerability information sharing. These measures will help government and industry respond to AI-related incidents. However, policy actions should also improve federal incident assessment and coordination for significant AI incidents impacting multiple economic sectors. This could include an AI Incident Unified Coordination Group (AI-UCG) to coordinate federal response to major AI crises, mirroring the Cyber UCG model. The White House could also establish the Rapid Emerging Assessment Council for Threats (REACT) that IAPS recommended in our RFI response.

Questions about the American AI Exports Program

  • Ultimately, the Action Plan leaves open the question of what the Trump administration’s overall strategy will be for export controls: It is unclear if the administration still intends to pursue a more comprehensive replacement for Biden’s diffusion framework, e.g. through replicating the country-level deals they made with the UAE and KSA, and what that will look like. While the Action Plan, and an associated EO, does lay out an American AI Exports Program, this program is not directly connected to export controls – it could, however, evolve into a program for identifying secure, full-stack export packages that could receive expedited licenses to export to countries where risk of diversion is significant, or where American chips would likely be used to the benefit of Chinese companies, if exported on their own.


Gaps in the Plan and Opportunities for Congress 

Of course, no plan is perfect. The AI Action Plan still has several critical gaps that need attention. This is an opportunity for Congress and the Administration to work together to provide departments and agencies with further direction, appropriate resources and authorities, and where needed, passing legislation to address policy gaps.

Here are the key gaps as seen by our staff:

Missing Executive Action on Security? 

  • The Action Plan clearly establishes AI as a national security priority and outlines important actions like bolstering evaluations, investing in security R&D, building high-security data centers, and strengthening intelligence analysis and sharing on adversaries' AI capabilities. These are all positive moves that show the Administration takes the trajectory of AI capabilities seriously. However, the three executive orders that have been released so far focus just on the parts of the plan that promote and accelerate American AI development. The question for me is will the security components - which are foundational to adoption and competitiveness - see EOs as well? If not, what additional hooks or guidance will help agencies turn this plan into reality? In a moment when budgets and staff have been cut across the government (even though all signals are that AI is a priority), agencies will need to make tradeoffs that could make realizing this plan challenging.

Congress Needs to Act on CAISI

  • The Center for AI Standards and Innovation (CAISI) is mentioned 17 times in the AI Action Plan–exciting considering that its continued existence was uncertain until a month and a half ago! But to ensure CAISI can deliver on the national security priorities it is tasked with (such as building national security-related AI evaluations, securing private sector AI innovation, and conducting research into PRC models), it will be important to adequately resource it–right now, CAISI is not codified, though members of Congress have proposed bills to do so, and may continue this year. 

    • Shaun Ee, Policy and Strategy Manager, Frontier Security, IAPS and Joe O’Brien, Researcher, Frontier Security, IAPS

Calls for Greater Industry Transparency

  • While the Action Plan includes meaningful oversight of adversary AI development, it is relatively light on transparency from domestic AI developers. Transparency may be critical for mitigating risks from internal industry models which are ahead of adversary models, as we argue in this new report. A simple measure to address this issue would be AI whistleblower provisions; for example, Congress is currently considering the bipartisan AI Whistleblower Protection Act, which would create legally-protected channels for reporting imminent AI risks.

Concerning Federal Talent Gaps in a Crisis 

  • The Trump administration is right to flag the shortage of federal AI talent as a major obstacle—not just to innovation, but to keeping Americans safe. A talent-exchange program is a good start, but it won’t reach the deep bench of expertise that sits outside government. In a crisis, we can’t wait months for hiring and clearance. Washington needs flexible mechanisms to bring in outside experts quickly—like a reserve corps of pre-cleared AI talent or expanding eligibility under the Intergovernmental Personnel Act to include for-profit companies.

Federal Action is Still Needed to Avoid Patchwork Regulations  

  • The plan makes AI-related discretionary funding to states potentially contingent on favorable state regulatory environments. Avoiding a patchwork of conflicting state AI regulations is a legitimate goal. But limiting state policy should be followed by federal legislation that increases awareness of dangerous AI capabilities, builds rapid response capacity, incentivizes corporate developers to prioritize security without stifling innovation, and guards against risks like model theft and sabotage. What we shouldn’t do is restrict funds to states for stepping in to fill a regulatory vacuum—especially if those funds affect non-AI programs or communities.

International Cooperation may be Needed to Address Shared Threats 

  • The plan focuses heavily on winning a race with foreign adversaries (read: China) for AI dominance. Although the US has compelling national security reasons to want such dominance, this focus downplays the national security risks from a possible shared competitor in future: powerful AI systems operating outside of any human’s control. The two countries might need to agree on shared guardrails to keep such scenarios in check. As an example, they could agree to preserve features of existing AI systems that make them comparably easy to control, such as human-language chain-of-thought

Policy Tensions on Compute and Export Controls 

  • The Action Plan does not state a clear plan for what will and will not be exported to China. The administration’s recent reversal on their stance on the Nvidia H20 was justified based on the idea that it is desirable for Chinese AI companies to be building on the American tech stack, but the Action Plan primarily talks about allies building on it, and notes that “Denying our adversaries access to [advanced AI compute] is a matter of both geostrategic competition and national security.” After existing H20 stocks are sold out, it is unclear which chips will and won’t be approved for sale to China, and the Action Plan suggests there is disagreement on this within the administration. I would urge the administration to take a firm stance against empowering the Chinese AI industry. The primary effect of letting Chinese AI companies build on the American tech stack will be to make it easier for them to gain global market share, without doing much to slow down Huawei and SMIC.

  • It’s great to see the AI Action Plan highlight the need to keep American AI technology out of the hands of our adversaries and call for vigilance on this. However, while the AI Action Plan was being written, over $1B in AI chips was illicitly smuggled into China, as reported by the Financial Times, including the latest most advanced Nvidia GB200s. Unfortunately this is just one small part of a larger problem that we documented in a report with CNAS where we found smuggling to be a key way that the CCP gains compute and builds their own AI advantage. Ongoing smuggling should be a full red alert for this administration and needs to be tackled with the due urgency called for in the plan.

Next
Next

Managing Risks from Internal AI Systems