Research
The body of work below includes IAPS’ public research reports, responses to government requests for information, blog posts, and more.
Adding location verification features to AI chips could unlock new governance mechanisms for regulators, help enforce existing and future export controls by deterring and catching smuggling attempts, and enable post-sale verification of chip locations. This paper serves as an initial introduction to location verification use cases for AI chips, along with a comparison of different methods.
This blog post by Erich Grunewald (IAPS) and Samuel Hammond (the Foundation for American Innovation) argues that Congress should increase the funding of the Bureau of Industry and Security.
This issue brief suggests agenda items for dialogues about advanced AI risks that minimize risk of leaking sensitive information.
This issue brief analyzes key AI-related allocations from the Biden FY2025 Presidential Budget in terms of their potential impact on the responsible development of advanced AI.
Mitigating the risks from frontier AI systems requires up-to-date and reliable information about those systems. Organizations that develop and deploy frontier systems have significant access to such information. By reporting safety-critical information to actors in government, industry, and civil society, these organizations could improve visibility into new and emerging risks posed by frontier systems.
A systematic search for potential case studies relevant to advanced AI regulation in the United States, examining all federal agencies for factors such as level of expertise, use of risk assessment, and analysis of uncertain phenomena.
This issue brief evaluates the original example of a Responsible Scaling Policy (RSP) – that of Anthropic – against guidance on responsible capability scaling from the UK Department for Science, Innovation and Technology (DSIT).
On this episode of the Federal Drive with Tom Temin, IAPS consultant Onni Aarne discusses how specialized AI chips, and the systems that use them, need protection from theft and misuse. The podcast episode and interview transcript are available on the Federal News Network.
IAPS’s response to a NIST RFI, outlining specific guidelines and practices that could help AI actors better manage and mitigate risks from AI systems, particularly from dual-use foundation models.
Today, the Center for a New American Security (CNAS), in collaboration with the Institute for AI Policy and Strategy, has released a new report, Secure, Governable Chips, by Onni Aarne, Tim Fist, and Caleb Withers.
The report introduces the concept of “on-chip governance,” detailing how security features on AI chips could help mitigate national security risks from the development of broadly capable dual-use AI systems, while protecting user privacy.
This paper examines the Federal Select Agent Program, the linchpin of US biosecurity regulations. It then draws out lessons for AI regulation regarding licensing, regulatory expertise, and the merits of “risk-based” vs. “list-based” systems.
This primer introduces the topic of Chinese AI chip making, relevant to understanding and forecasting China's progress in producing AI chips indigenously.
With this paper, we aim to help actors who support alignment efforts make those efforts as effective as possible, and to avoid potential adverse effects.
This paper discusses how external scrutiny (such as third-party auditing, red-teaming, and researcher access) can bring public accountability to bear on decisions regarding the development and deployment of frontier AI models.
We link to a working paper led by Tim Fist of the Center for a New American Security and coauthored with IAPS researcher Erich Grunewald. It builds on IAPS's earlier report on AI chip smuggling into China.
Events that bring together international stakeholders to discuss AI safety are a promising way to reduce AI risks. This report recommends ways to make these events a success.
IAPS researchers were interviewed on The Dynamist about compute governance, including AI chip smuggling and recent updates to export controls.
This paper discusses risks from future AI systems and proposes priorities for AI R&D and governance. Its many authors include an IAPS researcher, Turing Award winners, and a Nobel Memorial Prize winner.
The complex and evolving threat landscape of frontier AI development requires a multi-layered approach to risk management (“defense-in-depth”). By reviewing cybersecurity and AI frameworks, we outline three approaches that can help identify gaps in the management of AI-related risks.
This article was written for the organization 80,000 Hours by an IAPS researcher. It discusses why and how it may be valuable to build expertise in AI hardware and use that expertise to reduce risks and improve governance decisions.
This report examines the prospect of large-scale smuggling of AI chips into China and proposes six interventions to mitigate that risk.
This paper, led by the Centre for the Governance of AI, evaluates the risks and benefits of open-sourcing, as well as alternative methods for pursuing open-source objectives.
This report describes a toolkit that frontier AI developers can use to respond to risks discovered after deployment of a model. We also provide a framework for AI developers to prepare and implement this toolkit.
We’re excited to introduce the Institute for AI Policy & Strategy (IAPS), a think tank with the mission of reducing risks related to the development & deployment of frontier AI systems.