Building AI Surge Capacity: Mobilizing Technical Talent into Government for AI-Related National Security Crises
This report was coauthored by Joe O’Brien and Zoe Williams of the Institute for AI Policy and Strategy, Shefali Agrawal and Nandini Shiralkar of Cambridge ERA, and Mackenzie Arnold and Alexandra Jumper of the Institute for Law & AI.
The United States could soon face national security crises in which AI plays a decisive role. Yet the U.S. government does not currently have enough specialized AI security talent to respond to such crises, nor does it have the hiring and clearance mechanisms to surge external experts into short-term service at the speed a crisis demands. In the midst of a crisis, days matter; the federal hiring system typically takes months.
Building AI Surge Capacity sets out how to prepare for that challenge. The report:
Underscores the urgency of preparation, outlining seven plausible AI-related crises that could emerge in the next two to five years, such as AI-enabled cyberattacks on critical infrastructure, misuse of models to design biological agents, and failures of deployed defense systems that trigger unintended military escalation.
Outlines the requisite expertise, identifying the specialized technical talent the government would need to mobilize in such crises and providing a Typology of AI Security Talent to guide agencies in assessing and addressing gaps.
Maps where AI security expertise resides, finding that while the federal government has some capable teams, they are often small, overextended, and lacking the interdisciplinary depth needed for cross-domain crises. Most relevant expertise remains outside of government, spread across industry, academia, federally funded research and development centers, nonprofit research organizations, and the independent researcher community.
Assesses existing hiring and clearance pathways, including the Intergovernmental Personnel Act, expert and consultant appointments, and Schedule A authorities. These mechanisms could enable rapid mobilization of external experts, but they are slowed by long hiring timelines, clearance barriers, and conflict-of-interest restrictions.
Together, these findings show that while existing mechanisms could, in principle, be used to bring outside experts into government, they fall short of what would be needed to bolster the government's in-house expertise at the speed and scale that AI-related crises will demand. To address these limitations, the report offers six policy recommendations for strengthening federal surge capacity:
Conduct agency inventories of existing AI security expertise to identify gaps and strengths across the federal workforce.
Establish a National AI Reserve Corps to maintain a pre-cleared, cross-sector pool of technical experts who can be activated during crises.
Make greater use of contract-based surge capacity through prearranged agreements and “zero-hour” contracts for rapid activation.
Expand hiring authorities to access private-sector talent, including creating a new authority or reforming the Intergovernmental Personnel Act.
Reform the security clearance process and requirements by creating expedited pathways and consistent, risk-adjusted standards.
Build the capacity to hire by investing in technical recruiting expertise and upskilling federal hiring teams.
Explore the full set of materials:
Executive Summary: A concise overview of our findings and policy recommendations.
Illustrative AI-Related National Security Crises: Scenarios underscoring the need for surge capacity.
Typology of AI Security Talent: A framework for identifying technical skills critical for crisis response.
Comparative Analysis of Key Hiring Authorities: A comparison of the most promising mechanisms available to bring external talent into short-term government service.