Response to the American Science Acceleration Project RFI
This post contains IAPS’s response to the Request for Information from Senators Heinrich and Rounds as part of the American Science Acceleration Project (ASAP), a national initiative to accelerate the pace of American technical innovation.
A National Center for Advanced AI Reliability and Security
This is a linkpost for a policy memo published by the Federation of American Scientists, which proposes scaling up CAISI into a significantly enhanced “CAISI+” within the Department of Commerce.
How Some of China’s Top AI Thinkers Built Their Own AI Safety Institute
The emergence of the China AI Safety and Development Association (CnAISDA) is a pivotal moment for China’s frontier AI governance. How it navigates substantial domestic challenges and growing geopolitical tensions will shape conversations on frontier AI risks in China and abroad.
A Whistleblower Incentive Program to Enforce U.S. Export Controls
A program modeled on the successful SEC program would help America overcome its export control enforcement woes.
Countering AI Chip Smuggling Has Become a National Security Priority: An Updated Playbook for Preventing AI Chip Smuggling to the PRC
The Center for a New American Security (CNAS), in collaboration with the Institute for AI Policy and Strategy, has released a new working paper cataloguing evidence that substantial quantities of advanced artificial intelligence (AI) chips are being smuggled into China, undermining U.S. national security.
Accelerating R&D for Critical AI Assurance and Security Technologies
A memo outlining a strategic, coordinated policy approach to supporting R&D that addresses urgent assurance and security challenges related to frontier AI systems.
Asymmetry by Design: Boosting Cyber Defenders with Differential Access to AI
“Differential access” is a strategy to tilt the cybersecurity balance toward defense by shaping access to advanced AI-powered cyber capabilities. We introduce three possible approaches: Promote Access, Manage Access, and Deny by Default. One constant holds across all three: even in the most restrictive scenarios, developers should aim to advantage cyber defenders.
Expert Survey: AI Reliability & Security Research Priorities
Our survey of 53 specialists across 105 AI reliability and security research areas identifies the most promising research prospects to guide strategic AI R&D investment.
Location Verification for AI Chips
Adding location verification features to AI chips could unlock new governance mechanisms for regulators, help enforce existing and future export controls by deterring and catching smuggling attempts, and enable post-sale verification of chip locations. This paper serves as an initial introduction to location verification use cases for AI chips and compares different verification methods.
Comment on the Bureau of Industry and Security’s Framework for Artificial Intelligence Diffusion
As the administration works towards a strong, streamlined successor to the diffusion rule, we offer recommendations for BIS across three core objectives: (1) Steer the global distribution of American compute to preserve America’s lead in AI; (2) Ensure importing countries—including allies—uphold US export controls or face strict import limits, and use existing technology to address enforcement challenges such as illegal AI chip reexports; and (3) Secure key AI models stored on foreign soil, as model weight theft represents a major potential “compute shortcut” for adversaries.
The US Government’s Role in Advanced AI Development: Predictions and Scenarios
There has been significant recent speculation about whether the US government will lead a future project to build and acquire advanced AI, or continue to play a more arm's-length role. We conducted a forecasting workshop on this question, employing the IDEA protocol to elicit predictions from six professional forecasters and five experts on US AI policy.
AI Agent Governance: A Field Guide
This report is an accessible guide to the emerging field of AI agent governance, including an analysis of the current landscape of agents and their capabilities, the novel and enhanced risks posed by more agentic systems, and major open questions and interventions for agent governance.
Helping the AI Industry Secure Unreleased Models is a National Security Priority
While attention focuses on publicly available models like ChatGPT, the real risk to U.S. national interests is the theft of unreleased “internal models.” To preserve America’s technological edge, the U.S. government must work with AI developers to secure these internal models.
Response to OSTP RFI on AI Action Plan
Our comments focus on ways the US AI Action Plan can build trust in American AI, deny advantages to adversaries, and prepare to adapt as the technology evolves.
AI Chip Smuggling is the Default, not the Exception
If the US is serious about outcompeting China in AI, it needs to strengthen, not weaken, its AI chip export regime. A crucial first step is eliminating widespread AI chip smuggling.
AI Companies’ Safety Research Leaves Important Gaps. Governments and Philanthropists Should Fill Them.
This is a linkpost for an article written by IAPS researchers Oscar Delaney and Oliver Guest.
AI safety needs Southeast Asia’s expertise and engagement
This is a linkpost for a Brookings Institution article written by IAPS researchers Shaun Ee and Jam Kraprayoon.
Technology to Secure the AI Chip Supply Chain: A Working Paper
This is a linkpost for a piece that Tao Burga, an IAPS fellow, co-authored with researchers from CNAS (Center for a New American Security).
Who should develop which AI evaluations?
This paper, published by the Oxford Martin AI Governance Initiative, explores how to determine which actors are best suited to develop AI model evaluations. IAPS staff Renan Araujo, Oliver Guest, and Joe O’Brien were among the co-authors.