Research and Blog
See IAPS’s body of work across our four focus areas
This issue brief suggests agenda items for dialogues about advanced AI risks, chosen to minimize the risk of leaking sensitive information.
This issue brief analyzes key AI-related allocations from the Biden FY2025 Presidential Budget in terms of their potential impact on the responsible development of advanced AI.
Mitigating the risks from frontier AI systems requires up-to-date and reliable information about those systems. Organizations that develop and deploy frontier systems have significant access to such information. By reporting safety-critical information to actors in government, industry, and civil society, these organizations could improve visibility into new and emerging risks posed by frontier systems.
This report conducts a systematic search for potential case studies relevant to advanced AI regulation in the United States, examining all federal agencies for factors such as level of expertise, use of risk assessment, and analysis of uncertain phenomena.
This issue brief evaluates the original example of a Responsible Scaling Policy (RSP) – that of Anthropic – against guidance on responsible capability scaling from the UK Department for Science, Innovation and Technology (DSIT).
On this episode of the Federal Drive with Tom Temin, IAPS consultant Onni Aarne discusses how specialized AI chips, and the systems that use them, need protection from theft and misuse. The podcast episode and interview transcript are available on the Federal News Network.
IAPS’s response to a NIST RFI, outlining specific guidelines and practices that could help AI actors better manage and mitigate risks from AI systems, particularly from dual-use foundation models.
Today, the Center for a New American Security (CNAS), in collaboration with the Institute for AI Policy and Strategy, has released a new report, Secure, Governable Chips, by Onni Aarne, Tim Fist, and Caleb Withers.
The report introduces the concept of “on-chip governance,” detailing how security features on AI chips could help mitigate national security risks from the development of broadly capable dual-use AI systems, while protecting user privacy.
This paper examines the Federal Select Agent Program, the linchpin of US biosecurity regulations. It then draws out lessons for AI regulation regarding licensing, regulatory expertise, and the merits of “risk-based” vs. “list-based” systems.
This primer introduces Chinese AI chipmaking, a topic relevant to understanding and forecasting China's progress in producing AI chips indigenously.