Shaun Ee

Asymmetry by Design: Boosting Cyber Defenders with Differential Access to AI

“Differential access” is a strategy to tilt the cybersecurity balance toward defense by shaping access to advanced AI-powered cyber capabilities. We introduce three possible approaches: Promote Access, Manage Access, and Deny by Default. One constant holds across all three: even in the most restrictive scenarios, developers should aim to advantage cyber defenders.

Issue Brief Asher Brass

Location Verification for AI Chips

Adding location verification features to AI chips could unlock new governance mechanisms for regulators, help enforce existing and future export controls by deterring and catching smuggling attempts, and enable post-sale verification of chip locations. This paper serves as an initial introduction to location verification use cases for AI chips, with a comparison of different methods.

Erich Grunewald

Comment on the Bureau of Industry and Security’s Framework for Artificial Intelligence Diffusion

As the administration works towards a strong, streamlined successor to the diffusion rule, we offer recommendations for BIS across three core objectives: (1) Steer the global distribution of American compute to preserve America’s lead in AI; (2) Ensure importing countries—including allies—uphold US export controls or face strict import limits, and use existing technology to address enforcement challenges such as illegal AI chip reexports; and (3) Secure key AI models stored on foreign soil, as model weight theft represents a major potential “compute shortcut” for adversaries.

Bill Anderson-Samways

The US Government’s Role in Advanced AI Development: Predictions and Scenarios

There has been significant recent speculation about whether the US government will lead a future project to build and acquire advanced AI, or continue to play a more arm's-length role. We conducted a forecasting workshop on this question, employing the IDEA protocol to elicit predictions from six professional forecasters and five experts on US AI policy.

Research Report Jam Kraprayoon

AI Agent Governance: A Field Guide

This report is an accessible guide to the emerging field of AI agent governance, including an analysis of the current landscape of agents and their capabilities, the novel and enhanced risks posed by more agentic systems, and major open questions and candidate interventions.

Institute for AI Policy and Strategy

Response to OSTP RFI on AI Action Plan

Our comments focus on ways the US AI Action Plan can build trust in American AI, deny advantages to adversaries, and prepare to adapt as the technology evolves.

Renan Araujo

Who should develop which AI evaluations?

This paper, published by the Oxford Martin AI Governance Initiative, explores how to determine which actors are best suited to develop AI model evaluations. IAPS staff Renan Araujo, Oliver Guest, and Joe O’Brien were among the co-authors.

Oliver Guest

The Future of the AI Summit Series

This is a link post for a paper led by researchers from the Oxford Martin AI Governance Initiative, on which IAPS researcher Oliver Guest was a co-author.

Oliver Guest

Bridging the Artificial Intelligence Governance Gap: The United States' and China's Divergent Approaches to Governing General-Purpose Artificial Intelligence

A look at U.S. and Chinese policy landscapes reveals differences in how the two countries approach the governance of general-purpose artificial intelligence. Three areas of divergence are notable for policymakers: the focus of domestic AI regulation, key principles of domestic AI regulation, and approaches to implementing international AI governance.

Commentary Sumaya Nur Adan

Key questions for the International Network of AI Safety Institutes

In this commentary, we explore key questions for the International Network of AI Safety Institutes and suggest ways forward given the upcoming San Francisco convening on November 20-21, 2024. What should the network work on? How should it be structured in terms of membership and central coordination? How should it fit into the international governance landscape?

Renan Araujo

Understanding the First Wave of AI Safety Institutes: Characteristics, Functions, and Challenges

AI Safety Institutes (AISIs) are a new institutional model for AI governance that has expanded across the globe. In this primer, we analyze the “first wave” of AISIs, the institutions established by the UK, the US, and Japan, identifying their shared fundamental characteristics and functions: they are governmental and technical bodies with a clear mandate to govern the safety of advanced AI systems.
