Takeaways from the India AI Impact Summit

This memo analyzes the key outcomes and geopolitical dynamics of the India AI Impact Summit 2026, held in New Delhi from February 16–21.

The India AI Impact Summit was the fourth in a series of global AI summits following Bletchley Park (2023), Seoul (2024), and Paris (2025). Notably, it was the first hosted by a Global South economy. Delegates from over 100 countries, including more than 20 Heads of State, attended the Summit, alongside AI executives such as Sundar Pichai (Google), Sam Altman (OpenAI), Dario Amodei (Anthropic), and Demis Hassabis (Google DeepMind). 

The stated goal of the Summit was to shift the global AI conversation toward “demonstrable impact”, with a focus on inclusive growth. This included orienting the Summit around concrete AI use cases, with panels on driving adoption in sectors such as finance, healthcare, and agriculture.

Key takeaways 

  • Differing US-China priorities for the Summit. The US focused on industrial policy and commercial partnerships, positioning the American AI stack as the route to AI sovereignty while explicitly rejecting global governance of AI. China continued to champion multilateral governance but was noticeably more muted than at Paris, with a smaller delegation and fewer announcements.

  • Contrasting visions of AI sovereignty. AI sovereignty featured prominently but definitional ambiguity masked substantive differences. India framed sovereignty as indigenous capability, France as strategic autonomy from concentrated technological power, and the US as adoption of the American AI stack under local control.

  • Continued official shift away from frontier AI topics. AI executives spoke about short timelines to AGI, but the New Delhi Declaration omitted any reference to frontier AI, and the voluntary commitments made by AI companies were much less focused on severe risks from frontier AI than the comparable commitments from Seoul.

US and China 

The India Summit provided insights into the strategic priorities of the world's two leading AI powers. The US prioritized AI adoption, both domestically and internationally, seeking to become the partner of choice for countries pursuing AI sovereignty. China continued to emphasize deepening cooperation on global AI governance, though less prominently than in Paris.

United States

The US signaled a shift from diplomatic engagement to industrial policy and commercial partnerships. Office of Science and Technology Policy (OSTP) Director Michael Kratsios, who led the US delegation this year, delivered a targeted, pro-industry message. In his speech, he articulated key aspects of the Trump Administration's AI agenda and stressed the importance of minimizing bureaucracies and centralized control to drive AI adoption. This echoed Vice President J.D. Vance’s remarks at the previous summit, in Paris.

Kratsios identified trust and regulatory clarity as the two factors limiting AI adoption. To address these domestically, the Administration would support a national AI policy framework and use-case- and sector-specific regulation, reiterating the Executive Order issued in December.

The US aims to be the partner of choice in enabling other countries to achieve “AI sovereignty”. Kratsios defined AI sovereignty as countries owning and using best-in-class technology while charting their own “national destiny”, with the US as their partner. New initiatives to support this included a Tech Corps (a rebranding of the Peace Corps for AI-era technical assistance) and a National Champions Initiative to integrate partner-nation companies with the American AI stack, building on the American AI Exports Program.

China 

China continues to champion multilateral AI governance. The Chinese delegation was led by Vice Minister of Science and Technology Chen Jiachang, who in his remarks emphasized the Global AI Governance Initiative, deepening cooperation on global AI governance, and the benefits of open-source AI. Chen also highlighted the World AI Conference (WAIC), due to take place in Shanghai later this year, but made no mention of the World Artificial Intelligence Cooperation Organization (WAICO), despite its announcement at the last WAIC. Pre-summit reporting indicated that China’s aim was to observe and learn without dominating the conversation, with messaging aligned with Premier Li Qiang's framing of AI as a “public good” that can benefit developing countries.

Chinese participation was noticeably more muted than at the Paris AI Action Summit. The Chinese delegation in India was smaller, with fewer public speeches and announcements. Speeches largely reiterated past positions. This differed from the Paris Summit where the China AI Safety and Development Association (CnAISDA) was launched with significant visibility. One reason for this lower profile may be that the Summit coincided with the Chinese New Year holiday.

Chinese media framed the Summit as significant but posing minimal challenge to China's position. The Summit was broadly characterized as a means for India to strategically position itself in the global AI landscape. Commentators acknowledged India’s growing appeal as a major destination for AI investment, but pointed to structural challenges such as underdeveloped regulatory frameworks, energy constraints, and a shortage of high-end AI talent. Commentators argued that these would limit India’s ability to directly compete with China. The gap in both countries’ technological capabilities was further underscored by the controversy around India’s Galgotias University presenting a robotic dog made by Chinese firm Unitree Robotics as its own.

AI Sovereignty and International Cooperation  

“AI sovereignty” emerged as a central theme of the Summit, though countries saw the concept in different ways. There were also some prominent calls for further international cooperation on AI, though such efforts face significant headwinds.

AI Sovereignty

For India, sovereignty meant building indigenous AI capability from the ground up. Indian Prime Minister Narendra Modi outlined this through his MANAV vision where the "National Sovereignty" pillar states, "Data belongs to those who generate it”. This was reinforced by India's push to demonstrate indigenous capability, with the Summit serving as a launchpad for domestically developed AI models. Separately, the Indian Government announced infrastructure-related investment pledges exceeding $250 billion and $20 billion in deep-tech venture commitments, reinforcing the trend from Paris of host countries leveraging the Summit to attract and highlight investment.

For France, sovereignty was framed primarily as strategic autonomy from concentrated technological power. In his opening remarks, French President Emmanuel Macron complimented India for making a “deliberate sovereign choice” to develop task-specific small language models designed to run on smartphones, emphasizing that both Europe and India “chose independence, and both were right”. Macron also positioned European investments in large language models as a counterweight to foreign AI firms.

The US defined sovereignty as countries owning and using best-in-class technology while charting their own “national destiny”. Kratsios cautioned that complete technological self-containment is unrealistic for any country, given the complexity of the AI stack. In practice, the US vision essentially entails countries adopting the American AI stack (comprising chips, models, and infrastructure) under local control, rather than building independent alternatives. 

International Cooperation 

Calls for international AI governance gained some traction, but face significant implementation challenges. Most notably, Altman suggested establishing an IAEA-type body for AI, reviving a call OpenAI first made in mid-2023, arguing that such a mechanism would help countries respond quickly to evolving risks. UN Secretary-General Antonio Guterres also cautioned that the future of AI must not be determined by a small group of countries or private interests. However, it is unclear how an IAEA-type body or existing UN-led processes, such as the Global Dialogue on AI Governance, would play out in practice: the US explicitly rejected global AI governance, and UN agencies may struggle to keep pace with the speed of AI development.

Frontier AI

AI executives discussed their timelines to AGI, with some suggesting AGI could be only years away. However, the Summit's official outputs did not match this urgency, with the New Delhi Declaration omitting any reference to frontier AI. Likewise, the commitments made by AI developers in Delhi were fairly mild from a frontier AI perspective, focusing on sharing usage insights and promoting multilingual evaluations.

Accelerating AGI timelines stood in contrast to the Summit’s modest diplomatic outcomes. Altman suggested the world could be "a couple of years away" from early forms of superintelligence, while Hassabis indicated that AGI could arrive within five years, shortening his estimate from the previous year. Against these compressed forecasts, the New Delhi Declaration on AI Impact, the Summit's most prominent diplomatic output, was notably mild. It contained largely aspirational language drawn from past UN resolutions, and omitted any reference to frontier AI or associated risks, in contrast to the declarations from Bletchley and Seoul.

The New Delhi Frontier AI Impact Commitments made by AI developers also reflected the continued shift away from frontier AI concerns. The Commitments focus on advancing understanding of real-world AI usage through anonymized insights and strengthening multilingual and contextual evaluations. This marked a retreat from the Seoul Summit's Frontier AI Safety Commitments, which had required signatories to assess risks across the AI lifecycle, define intolerable risk thresholds, and commit to not developing or deploying frontier models if risks could not be kept below the thresholds. That said, the Seoul Commitments' practical impact has been limited. Several signatories have yet to publish the safety frameworks they pledged to produce, and adherence to transparency and external evaluation has been uneven. 

What to Watch for 

How the US’s and China’s strategic visions for AI play out. Whether the US can translate its vision of AI sovereignty into durable commercial partnerships will become clearer in the coming months. Likewise, China's restrained approach in Delhi raises the question of whether it will pursue its global governance aspirations more assertively at WAIC in Shanghai later this year, such as through further development of WAICO.

Whether middle powers can carve out a meaningful role in AI governance. As Chatham House observed, the Summit could be seen as India's attempt to position itself as an alternative to the US and China on tech leadership. Much will depend on whether India can move beyond aspirational framing to deliver tangible governance outcomes. India's hosting of the BRICS Summit later this year is an opportunity to translate Summit momentum into structured cooperation among developing nations.

Two other middle powers are next to host major AI convenings. The International Scientific Exchange to update the Global AI Safety Research Priorities and the next AI Summit are scheduled to take place in Singapore and Switzerland, respectively. This suggests that there could be room for countries traditionally seen as consensus-building states to play a larger role in shaping AI governance discussions at the international level.
