Responsible Reporting for Frontier AI Development

This is a link post for a paper led by Noam Kolt at the University of Toronto. IAPS researcher Asher Brass was among the paper’s coauthors.

Abstract: Mitigating the risks from frontier AI systems requires up-to-date and reliable information about those systems. Organizations that develop and deploy frontier systems have significant access to such information. By reporting safety-critical information to actors in government, industry, and civil society, these organizations could improve visibility into new and emerging risks posed by frontier systems. Equipped with this information, developers could make better informed decisions on risk management, while policymakers could design more targeted and robust regulatory infrastructure. We outline the key features of responsible reporting and propose mechanisms for implementing them in practice.

Voluntary implementation

The first pathway involves implementing responsible reporting as part of voluntary governance. In this scenario, we propose the following institutional mechanisms:

  • Differential disclosure. Concerns regarding the protection of intellectual property and commercially sensitive information are largely addressed by features of the framework discussed earlier in the paper, in particular the ability to tailor what information is disclosed, and to whom, based on its sensitivity.

  • Anonymized reporting. Certain potentially damaging information disclosed under the framework could be de-identified so that it cannot be attributed to a particular developer, protecting that developer’s reputation (a minimal sketch of one de-identification approach follows this list).

  • Organizational pre-commitments. Developers could collectively commit in advance to participate in the reporting framework. Such commitments could be supported by a bond-like regime in which developers make upfront payments before joining and are incrementally repaid contingent on their good-faith participation.
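
To make the de-identification idea concrete, here is a minimal sketch, not drawn from the paper, of how a trusted intermediary might pseudonymize developer identifiers before reports are shared. The intermediary role, the keyed-hash approach, and all names in the code (SECRET_KEY, pseudonymize, deidentify_report) are illustrative assumptions; real de-identification would also require scrubbing identifying detail from free-text fields.

    import hmac
    import hashlib

    # Illustrative placeholder: in practice the key would be held only by
    # a trusted intermediary, never by report recipients.
    SECRET_KEY = b"held-only-by-the-intermediary"

    def pseudonymize(developer_id: str) -> str:
        """Map a developer identifier to a stable pseudonym that cannot
        be reversed without the intermediary's secret key."""
        digest = hmac.new(SECRET_KEY, developer_id.encode(), hashlib.sha256)
        return "dev-" + digest.hexdigest()[:12]

    def deidentify_report(report: dict) -> dict:
        """Return a copy of a report with the developer field replaced by
        its pseudonym. Free-text fields would also need review, which
        this sketch does not attempt."""
        cleaned = dict(report)
        cleaned["developer"] = pseudonymize(report["developer"])
        return cleaned

    # Example: two reports from the same developer map to the same
    # pseudonym, preserving cross-report patterns without attribution.
    report = {"developer": "ExampleLab", "incident": "evaluation gap found"}
    print(deidentify_report(report))

A keyed hash rather than a plain hash matters here: because the set of frontier developers is small and known, an unkeyed hash of each candidate name could simply be computed and compared, trivially re-identifying reports.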

Regulatory implementation

The second pathway involves integrating responsible reporting into a broader purpose-built regulatory regime, such as those established under the U.S. Executive Order on AI or the EU AI Act. In this scenario, we suggest that the following institutional mechanisms would help facilitate more informative and actionable reporting:

  • Liability safe harbors. Developers’ reluctance to disclose information that may increase their legal exposure could be addressed by safe harbor provisions that protect companies from legal liability arising from their participation in the reporting framework.

  • Government resourcing. Under a purpose-built regulatory regime, government actors could be allocated resources to develop the technical and governance capacity to protect, analyze, and effectively respond to information disclosed under the framework.

  • Enforcement. If regulations imposed legal sanctions for negligent or deliberate misreporting, developers would be strongly incentivized to establish organizational processes for ensuring good-faith and effective reporting. Independent auditors approved by regulators could also assist in detecting misreporting.
