IAPS 2025 Year in Review
Dear Friends,
2025 has been an extraordinary year for AI—in technology, policy, and for IAPS.
We started with DeepSeek's R1 shaking up the conversation in January and ended with Gemini 3, GPT-5.2, and Opus 4.5 releasing within weeks of each other. Models became faster, more efficient, and more capable of genuine reasoning. Highly capable Chinese models now dominate open source, driving urgent debates about U.S.-China competition. The policy landscape shifted just as fast, with emphasis moving toward innovation, infrastructure, and diffusion.
Something else shifted: public awareness. Over 50% of Americans now use AI tools—adopting them faster for personal use than for work. Friends and family ask me about the latest models in ways they never did before. And yet only a small proportion are paying attention to what's happening at the frontier.
2025 was supposed to be "the year of the agent." We got mixed delivery. AI can automate software engineering to a large degree, but it still struggles with computer use and isn't yet fulfilling the personal assistant role some had expected. Nonetheless, the trajectory is clear: this time last year, most AI systems couldn't even search the web; now agentic systems string together complex tasks and operate with increasing autonomy. These capabilities are transformative, and we want this growth to continue. AI has enormous potential to solve hard problems, accelerate discovery, and improve lives.
Progress is spiky. One week brings a breakthrough; the next reveals unexpected limitations. And the same capabilities enabling scientific advances can be misused. We need visibility into what these systems can do. We need institutions prepared for capabilities advancing faster than our understanding. We need protections against misuse and dangerous concentrations of power.
None of this is straightforward. We're navigating a landscape where updates come fast and sometimes contradict each other, and where the same technology can be both beneficial and risky depending on how it's used. There are trade-offs all the way down.
This is what IAPS does. We provide technically grounded research that helps policymakers make sense of emerging capabilities and trade-offs—what they mean today and what they signal for tomorrow. Below, you'll find highlights from our recent work: research on agentic AI, chain-of-thought monitoring, cybersecurity, and more.
As we head into 2026, the questions are only getting harder. How should oversight be divided between federal, state, and private actors? How do we govern AI in critical infrastructure without clear accountability frameworks? Will rising public concern about jobs and safety reshape the debate? These are the questions we'll be tackling, and we're grateful to have you following along.
Best Wishes,
Jenny Marron
Executive Director, Institute for AI Policy and Strategy
2025 IAPS Highlights
This year, IAPS significantly expanded its research, policy engagement, and public presence. Across teams, IAPS published 8 major reports advancing how policymakers and researchers think about AI risk and governance. In 2025, IAPS researchers:
Introduced “differential access,” a strategy to tilt the cybersecurity balance toward defense by shaping access to advanced AI-powered cyber capabilities. We also identified implications of a recently-reported autonomous cyber campaign.
Created a field guide for AI agent governance, providing an overview of systems that can autonomously achieve goals in the world.
Built a framework for determining when coordination mechanisms are needed to preserve chain-of-thought monitorability.
Mapped AI security talent gaps and hiring constraints across the federal government and proposed a surge-capacity model to respond to AI-driven national security crises.
Assessed the current state of AI data center security and developed policy solutions to accelerate protections.
Examined China’s AI Safety and Development Association to understand how it navigates domestic challenges and growing geopolitical tensions related to frontier AI risks in China.
Catalogued evidence that substantial quantities of AI chips are being smuggled into China and recommended measures to strengthen export controls.
We contributed to America’s AI Action Plan, with recommendations reflected in provisions on location verification and R&D priorities. We supported the plan’s release through rapid-response analysis and a next-day expert panel.
Our in-person engagement in Washington, DC expanded significantly through targeted briefings, meetings with policymakers, and expert explainers. The IAPS team helped advance real legislative impact: the Chip Security Act and the Stop Stealing Our Chips Act incorporated IAPS research on location verification and a proposed BIS whistleblower program, respectively.
Beyond DC, IAPS experts participated in major international convenings on AI, such as the Athens Roundtable on AI and the Rule of Law and the International Association for Safe and Ethical AI (IASEAI) Conference in Paris. Our research impact also extended beyond the organization, with IAPS researchers co-authoring international AI governance reports.
IAPS research reached audiences around the world through over 40 media features highlighting our work on export controls, international governance, and agents, in outlets such as The Economist, RealClearPolitics, The New York Times, and TIME. To close out the year, IAPS's Chief Strategy Officer, Peter Wildeford, joined Ronny Chieng on The Daily Show to discuss AGI.
In 2025, we massively expanded both our internal capacity and our broader impact, finalizing an AI Policy Fellowship cohort of 29 fellows selected from over 5,600 applicants and welcoming 10 new staff members across research, policy, and operations.
If our work has been valuable to you, please consider donating to support what we do. Your support lets us grow our capacity and take on the research that will shape how AI develops and how it gets governed.