
Anthropic rejects latest US military offer, escalating AI feud

Maggie Eastland & Katrina Manson / Bloomberg • 4 min read
Anthropic said that while the Pentagon’s latest proposal fell short, the company continues to negotiate with defence officials and remains committed to working with the military.

(Feb 27): Anthropic PBC rejected the Pentagon’s latest offer in a dispute over safeguards around the use of its artificial intelligence (AI) technology by the US military, escalating the stand-off a day before a government deadline for the company to drop its restrictions or face severe consequences.

“These threats do not change our position: we cannot in good conscience accede to their request,” Anthropic chief executive officer Dario Amodei said in a statement on Thursday.

A company spokesperson said that new contract language from the Pentagon failed to satisfy the firm’s desire for curbs on military use of its AI tools. The company continues to insist on two specific restrictions: it doesn’t want its technology used for surveillance of US citizens or for autonomous lethal strikes without a human in the loop.

Anthropic said that while the Pentagon’s latest proposal fell short, the company continues to negotiate with defence officials and remains committed to working with the military. Since its founding, Anthropic has positioned itself as a company focused on the responsible use of AI with a goal of avoiding catastrophic harms from the technology.

The dispute centres on the company’s insistence that guardrails accompany military use of its Claude AI tool that the Pentagon sees as unnecessary. Defence officials have previously rejected Anthropic’s demands and insist on being able to run Claude — one of the only AI tools cleared for classified cloud work — without any restrictions from the company.

Under Secretary of Defense for Research and Engineering Emil Michael, who’s helping drive the Pentagon’s AI strategy, responded harshly to Amodei’s rejection. “He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk,” Michael wrote on X, accusing the Anthropic chief of having “a God-complex”. Anthropic did not immediately respond to a request, made outside regular office hours, for comment on Michael’s remarks.

If Anthropic fails to drop its conditions, the Defense Department has vowed to declare the company a supply-chain risk, a move that would preclude it from working with other defence contractors. The Pentagon has also threatened to invoke the Cold War-era Defense Production Act to use Anthropic’s software over the company’s objections.

At stake is up to US$200 million ($252.96 million) in work that Anthropic had agreed to do for the military, along with contracts for other government agencies that could also be imperilled. Amodei said he hopes the Defense Department will revisit its current position of only working with contractors who agree to an all-lawful-use standard.

“It is the department’s prerogative to select contractors most aligned with their vision,” Amodei said. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”

Now valued at roughly US$380 billion, Anthropic was the first AI company granted Pentagon clearance to handle classified material, and its Claude Gov tool quickly became a favoured option among defence personnel for its ease of use. The firm faces growing competition from Elon Musk’s xAI, which just won approval for classified work, as well as rivals OpenAI and Google’s Gemini.

The Defense Department’s latest contract terms were framed by US officials as a compromise, but they included legal language that could allow Anthropic’s two safeguards on surveillance and autonomous weaponry to be ignored, the company spokesperson added.

Amodei said that Anthropic understands that the Defense Department, and not private companies, makes final decisions for the US military. He said the company’s two conditions aren’t an attempt to set policy but rather seek to ensure that still-nascent — and occasionally error-prone — AI technology isn’t used in a way that exceeds its current capabilities.

The feud erupted just weeks after the Pentagon published a new strategy on AI that called for making the military an “AI-first” force by increasing experimentation with frontier models and reducing bureaucratic barriers to use. The approach specifically urged the Defense Department to choose models that are “free from usage policy constraints that may limit lawful military applications”.

Defence officials have reiterated that the military would use Anthropic’s technology within the bounds of the law. The Pentagon has no interest in mass surveillance or developing “autonomous weapons that operate without human involvement”, spokesman Sean Parnell said earlier on Thursday, addressing the company’s two main concerns.

“We will not let any company dictate the terms regarding how we make operational decisions,” Parnell wrote in a post on X. “They have until 5.01pm ET on Friday to decide. Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk.”

Uploaded by Tham Yek Lee

© 2026 The Edge Publishing Pte Ltd. All rights reserved.