Anthropic chief executive officer Dario Amodei said artificial intelligence (AI) companies, including his own, should be subject to mandatory testing requirements to ensure their technologies are safe for the public before release.
“I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it,” Amodei said in response to a question on the topic Wednesday at an AI safety summit in San Francisco hosted by the US Departments of Commerce and State.
Amodei’s remarks came after the US and UK AI Safety Institutes released the results of their testing of Anthropic’s Claude 3.5 Sonnet model in a range of categories, including cybersecurity and biological capabilities. Anthropic and rival OpenAI had previously agreed to submit their AI models to the government groups for testing.
Amodei noted there is a patchwork of voluntary, self-imposed safety guidelines that major companies have adopted, such as Anthropic’s Responsible Scaling Policy and OpenAI’s Preparedness Framework, but he said more explicit requirements are needed.
“There’s nothing to really verify or ensure the companies are really following those plans in letter or spirit. They just said they will,” Amodei said. “I think just public attention and the fact that employees care has created some pressure, but I do ultimately think it won’t be enough.”
Amodei’s thinking is partly informed by his belief that AI systems capable of outperforming even the smartest humans could arrive as soon as 2026. While AI companies are testing for biological threats and other catastrophic harms that remain hypothetical, he stressed these risks could become real very quickly.
At the same time, Amodei cautioned that testing requirements should remain “flexible” to account for how quickly the technology is changing. “We’re going to have to solve a very difficult socio-political problem,” he said.