Jam Kraprayoon

Highly Autonomous Cyber-Capable Agents: Anticipating Capabilities, Tactics, and Strategic Implications

Offensive cyber capabilities in frontier AI models are advancing fast. In a matter of months, models have gone from near-zero to meaningful success rates on expert-level security challenges, and leading AI developers have begun triggering their own internal risk thresholds for cybersecurity. Meanwhile, real-world cases have already emerged in which AI agents autonomously executed significant portions of state-sponsored cyber campaigns. These developments raise an increasingly urgent question: what happens when AI systems can plan, execute, and sustain sophisticated cyber operations entirely on their own?

Clarissa Koh

Takeaways from the India AI Impact Summit

The India AI Impact Summit was the fourth in a series of global AI summits. Its stated goal was to shift the global AI conversation toward “demonstrable impact”, with a focus on inclusive growth. The Summit was accordingly oriented around concrete use cases of AI, with panels discussing how to drive adoption in sectors like finance, healthcare, and agriculture.

Dave Banerjee

AI Integrity: Defending Against Backdoors and Secret Loyalties

Frontier AI systems are advancing rapidly and reshaping government operations. As government agencies integrate AI into intelligence analysis, policy research, software development, and military operations, adversaries are increasingly incentivized to compromise these systems. Defending against these threats requires preserving the integrity of AI systems. AI integrity means ensuring AI systems are free from secret or unauthorized modifications that could compromise their behavior.

Theo Bearman

Kimi Claw: Risks from Chinese-Hosted ‘Always On’ AI Agents

Beijing-based, Alibaba-backed AI company Moonshot now offers Kimi Claw, an 'always-on' AI agent embedded in its consumer platform that can continuously access users' files, apps, and communications. Where TikTok collects data from a single app, these agents represent a qualitatively deeper level of data exposure. This memo examines the privacy, cybersecurity, and national security risks, and recommends four low-cost steps the federal government can take now.

Erich Grunewald

Issue Brief: The Stop Stealing Our Chips Act

The Stop Stealing Our Chips Act is a bipartisan, bicameral bill introduced in 2025 that would authorize a new Bureau of Industry and Security (BIS) program to strengthen export enforcement by financially rewarding individuals who report export violations to US authorities. This memo explains the bill and offers recommendations to strengthen enforcement.

Oscar Delaney

Strategic Visions in AI Governance: Mapping Pathways to Victory

What AI policy objectives should one work towards? This depends greatly on one’s strategic vision. Strategic visions are high-level views about how to successfully navigate the transition to a world with powerful AI systems. The strategic visions discussed here particularly aim to address three severe risks: takeover by powerful misaligned AI systems, wars resulting from competitive dynamics around AI, and AI-enabled concentration of power among a small group of people.

Maxwell Roberts

New BIS Licensing Policy for H200s: Tough Guidelines, Weak Enforcement

On January 13, 2026, BIS released a new licensing policy for exports of the Nvidia H200 and similar AI accelerator chips to China. The policy is the regulatory implementation of the administration’s December 8, 2025 announcement that it would permit H200 sales to China in exchange for a 25% export fee. This memo analyzes and explains the new policy.

Oscar Delaney

Crucial Considerations in ASI Deterrence

A new memo by IAPS Associate Researcher Oscar Delaney reviews the emerging “MAIM” (mutual assured AI malfunction) literature and evaluates the strategic dynamics that could shape ASI deterrence.

Christopher Covino

The Emergence of Autonomous Cyber Attacks: Analysis and Implications

In November 2025, Anthropic reported detecting and disrupting one of the first largely AI-orchestrated cyber espionage campaigns. This appears to be the first publicly known example of AI systems autonomously conducting multi-step attacks against well-defended targets in the wild. It marks a significant step: autonomous offensive AI agents could enable nation-states to conduct continuous operations across multiple targets at an increased tempo, and these capabilities are likely to proliferate, enabling less sophisticated actors to conduct complex operations at faster speeds. This may shift advantages toward attackers until defensive capabilities are deployed at scale.

Research Report · Cara Labrador

Building AI Surge Capacity: Mobilizing Technical Talent into Government for AI-Related National Security Crises

The U.S. government does not currently have enough specialized AI security talent to respond to AI-related national security crises, nor does it have the hiring and clearance mechanisms to surge external experts into short-term service at the speeds a crisis demands. This report sets out how to prepare for that challenge.

Research Report · Oscar Delaney

Policy Options for Preserving Chain of Thought Monitorability

The most advanced AI models produce detailed reasoning steps in human language—known as "chain of thought" (CoT)—that provide crucial oversight capabilities for ensuring these systems behave as intended. However, competitive pressures may drive developers toward more efficient but non-monitorable architectures that lack a human-readable CoT. This report presents a framework for determining when coordination mechanisms are needed to preserve CoT monitorability.

Research Report · Erich Grunewald

Accelerating AI Data Center Security

AI systems are advancing at breakneck speed and already reshaping markets, geopolitics, and the priorities of governments. Frontier AI systems are developed and deployed using compute clusters of hundreds of thousands of cutting-edge AI chips housed in specialized data centers. These AI data centers are likely tempting targets for sophisticated adversaries like China and Russia, who may seek to steal intellectual property or sabotage AI systems underpinning military, industry, or critical infrastructure projects.

Erich Grunewald

How AI Chips Are Made

Adapted from a section of a report by Erich Grunewald and Christopher Phenicie, this blog post introduces the core concepts and background information needed to understand the AI chip-making process.

Blog Post · Erich Grunewald

Compute is a Strategic Resource

Computational power (“compute”) is a strategic resource in the way that oil and steel production capacity were in the past. Like them, compute is scarce, controllable, concentrated, and highly useful both economically and militarily, making it today a strategic resource of very high importance.

Link Post · Oscar Delaney

The Hidden AI Frontier

The most advanced AI systems remain hidden inside corporate labs for months before public release—creating both America's greatest technological advantage and a serious security vulnerability. IAPS researchers identify critical risks and propose lightweight interventions to lessen the threat.

Renan Araujo

Verification for International AI Governance

The growing impacts of artificial intelligence (AI) are spurring states to consider international agreements that could help manage this rapidly evolving technology. The political feasibility of such agreements can hinge on their verifiability—the extent to which the states involved can determine whether other states are complying. This report, published by the Oxford Martin School at the University of Oxford, analyzes several potential international agreements and ways they could be verified.

Blog Post · Christopher Covino

IAPS Researchers React: The US AI Action Plan

The Trump Administration unveiled its comprehensive AI Action Plan on Wednesday. Experts at the Institute for AI Policy and Strategy reviewed the plan with an eye toward its national security implications. As AI continues to accelerate towards very powerful artificial general intelligence, our researchers discuss promising proposals for addressing critical AGI risks, offer key considerations for government implementation, and explore the plan's gaps and potential solutions.

Research Report · Oscar Delaney

Managing Risks from Internal AI Systems

The most powerful AI systems are used internally for months before they are released to the public. These internal AI systems may possess capabilities significantly ahead of the public frontier, particularly in high-stakes, dual-use areas like AI research, cybersecurity, and biotechnology. To address these escalating risks, this report recommends a combination of technical and policy solutions.
