Former OpenAI Policy Chief Launches AVERI for Independent AI Safety Audits
the-decoder.com
January 19, 2026

AI-Generated Summary
Miles Brundage, formerly of OpenAI, has launched the AI Verification and Evaluation Research Institute (AVERI). This nonprofit aims to conduct independent safety audits of advanced AI models, addressing concerns that companies are currently self-regulating. AVERI has raised $7.5 million and proposes a tiered assurance framework. The initiative seeks to increase accountability through external validation of AI safety.
Miles Brundage, who led policy research at OpenAI for seven years, is calling for external audits of leading AI models through his new institute AVERI. His argument: the industry should no longer be allowed to grade its own homework.
Miles Brundage has founded the AI Verification and Evaluation Research Institute (AVERI), a nonprofit organization advocating for independent safety audits of frontier AI models. Brundage left OpenAI in October 2024, where he served as an advisor on how the company should prepare for the advent of artificial general intelligence.
"One of the things I learned while working at OpenAI is that companies are figuring out the norms of this kind of thing on their own," Brundage told Fortune. "There's no one forcing them to work with third-party experts to make sure that things are safe and secure. They kind of write their own rules."
The leading AI labs do conduct safety testing and publish technical reports, sometimes working with external red-team organizations. But consumers and governments currently have little choice but to trust what the labs say.
Insider donations hint at industry unease
AVERI has raised $7.5 million so far and is aiming for $13 million to cover 14 staff members. Funders include former Y Combinator president Geoff Ralston and the AI Underwriting Company. Notably, the institute has also received donations from employees at leading AI companies. "These are people who know where the bodies are buried," Brundage said, "and who would like to see more accountability."
Alongside the launch, Brundage and more than 30 AI safety researchers and governance experts published a research paper outlining a detailed framework for independent audits. The paper proposes "AI Assurance Levels": Level 1 roughly matches the current state, with limited third-party testing and restricted model access, while Level 4 would provide "treaty-grade" assurance robust enough to serve as a foundation for international agreements.
Insurers and investors could force the issue
Even without government mandates, several market mechanisms could push AI companies toward independent audits, Brundage believes. Large enterprises deploying AI models for critical business processes might require audits as a condition of purchase to protect themselves against hidden risks.