(Feb 25): Anthropic PBC, known for its commitment to artificial intelligence safeguards, has loosened its central safety policy, saying the change is necessary to remain competitive.
The company said in its 2023 Responsible Scaling Policy that it would delay AI development it deemed potentially dangerous. In a Tuesday (Feb 24) blog post, Anthropic said it had updated the policy: it will no longer commit to such delays if it believes it lacks a significant lead over a competitor.
“The policy environment has shifted toward prioritising AI competitiveness and economic growth, while safety-oriented discussions have yet to gain meaningful traction at the federal level,” Anthropic said in its post.
Anthropic, recently valued at US$380 billion, is racing for adoption among businesses and everyday users, battling the likes of OpenAI, Google and Elon Musk’s xAI Corp for dominance in what many view as a revolutionary new technology.
“From the beginning, we’ve said the pace of AI and uncertainties in the field would require us to rapidly iterate and improve the policy,” an Anthropic spokesperson said.
The updated policy was earlier reported by Time.