Anthropic chief executive officer Dario Amodei said artificial intelligence (AI) companies, including his own, should be subject to mandatory testing requirements to ensure their technologies are safe for the public before release.

“I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it,” Amodei said in response to a question on the topic Wednesday at an AI safety summit in San Francisco hosted by the US Departments of Commerce and State.

Amodei’s remarks came after the US and UK AI Safety Institutes released the results of their testing of Anthropic’s Claude 3.5 Sonnet model in a range of categories, including cybersecurity and biological capabilities. Anthropic, along with rival OpenAI, had previously agreed to submit its AI models to government groups for evaluation.