Christopher Covino

Advancing America’s Cyber Strategy with Differential Access

Advances in AI-enabled cyber capabilities risk handing threat actors the advantage. To advantage defenders instead, differential access initiatives shape who can access cyber-capable models, and on what terms. The U.S. government should leverage these initiatives to advance the White House Cyber Strategy and U.S. national security.

Oscar Delaney

Risk Reporting for Developers’ Internal AI Model Use

Frontier AI companies run their most capable models internally for weeks before public release. This report offers a harmonized reporting standard for internal use risks across SB 53, RAISE, and the EU Code of Practice.

Theo Bearman

AI Distillation Attacks: The Case for Targeted Government Intervention

In February 2026, Anthropic, OpenAI, and Google published evidence of systematic campaigns by Chinese AI companies to extract capabilities from American frontier models. This memo examines how distillation attacks work, why there is a case for targeted government intervention, and what that intervention might look like. Recommendations are offered to support industry efforts to counter distillation attacks: (1) consider BIS Entity List designations for adversary AI companies conducting distillation attacks; (2) assess the merits of PAIP Act sanctions against those engaging in or facilitating distillation attacks; (3) explore the development of a NIST-led AI Distillation Defense Framework for the broader ecosystem.

Jam Kraprayoon

Highly Autonomous Cyber-Capable Agents: Anticipating Capabilities, Tactics, and Strategic Implications

Offensive cyber capabilities in frontier AI models are advancing fast. In a matter of months, models have gone from near-zero to meaningful success rates on expert-level security challenges, and leading AI developers have begun triggering their own internal risk thresholds for cybersecurity. Meanwhile, real-world cases have already emerged in which AI agents autonomously executed significant portions of state-sponsored cyber campaigns. These developments raise an increasingly urgent question: what happens when AI systems can plan, execute, and sustain sophisticated cyber operations entirely on their own?

Theo Bearman

Kimi Claw: Risks from Chinese-Hosted ‘Always On’ AI Agents

Beijing-based, Alibaba-backed AI company Moonshot now offers Kimi Claw, an 'always-on' AI agent embedded in its consumer platform that can continuously access users' files, apps, and communications. Where TikTok collects data from a single app, these agents represent a qualitatively deeper level of data exposure. This memo examines the privacy, cybersecurity, and national security risks, and recommends four low-cost steps the federal government can take now.

Research Report Cara Labrador

Building AI Surge Capacity: Mobilizing Technical Talent into Government for AI-Related National Security Crises

The U.S. government does not currently have enough specialized AI security talent to respond to AI-related national security crises, nor does it have the hiring and clearance mechanisms to surge external experts into short-term service at the speeds a crisis demands. This report sets out how to prepare for that challenge.

Research Report Jam Kraprayoon

AI Agent Governance: A Field Guide

This report is an accessible guide to the emerging field of AI agent governance, including an analysis of the current landscape of agents and their capabilities, the novel and enhanced risks posed by more agentic systems, and major open questions and possible interventions.
