The Hidden AI Frontier

This report was authored by Ashwin Acharya, Independent Researcher, and Oscar Delaney, Research Assistant at IAPS.

OpenAI’s GPT-5 launched in early August, after extensive internal testing. But another OpenAI model, one with math skills advanced enough to achieve “gold medal-level performance” on the world’s most prestigious math competition, will not be released for months. This isn’t unusual. Increasingly, AI systems with capabilities considerably ahead of what the public can access remain hidden inside corporate labs.

This hidden frontier represents America’s greatest technological advantage — and a serious, overlooked vulnerability. These internal models are the first to develop dual-use capabilities in areas like cyberoffense and bioweapon design. And they’re increasingly capable of performing the type of research-and-development tasks that go into building the next generation of AI systems — creating a recursive loop where any security failure could cascade through subsequent generations of technology. They’re the crown jewels that adversaries desperately want to steal. This makes their protection vital. Yet the dangers they may pose are invisible to the public, policymakers, and third-party auditors.

While policymakers debate chatbots, deepfakes, and other more visible concerns, the real frontier of AI is unfolding behind closed doors. Therefore, a central pillar of responsible AI strategy must be to enhance transparency into and oversight of these potent, privately held systems while still protecting them from rival AI companies, hackers, and America’s geopolitical adversaries.

