Pentagon and Anthropic at Odds Over Military AI Applications

“Gentlemen, you can’t fight in here! This is the War Room.”

It’s not quite “Dr. Strangelove,” but there is a strange kind of love going on in the world of military AI.

The Pentagon is in a battle with AI developer Anthropic over safeguards that would prevent the government from deploying the company’s technology for autonomous weapons targeting and domestic U.S. surveillance, three people familiar with the matter told Reuters.

The discussions represent an early test case for whether Silicon Valley, now largely back in Washington’s good graces after years of strained relations, can shape how U.S. military and intelligence agencies deploy increasingly powerful AI on the battlefield and beyond.

After months of negotiations under a contract worth up to $200 million, the U.S. Department of Defense and Anthropic are at a standstill, according to six people familiar with the matter, all of whom spoke on condition of anonymity to discuss sensitive talks.

The disagreement highlights growing tension between national security agencies eager to harness commercial AI at scale and technology companies attempting to impose limits on how their tools are used, particularly in lethal or domestic contexts.

The standoff

Anthropic’s position on how its AI tools can be used has intensified disagreements with the Trump administration, the details of which have not been previously reported.

Anthropic, founded by former OpenAI executives and known for emphasizing AI safety, has pushed for contractual safeguards that would restrict how its models can be deployed. In discussions with government officials, Anthropic representatives raised concerns that their tools could be used to spy on Americans or assist weapons targeting without sufficient human oversight, some of the sources said.

Pentagon officials, however, have bristled at those restrictions. In line with a Jan. 9 department memo on AI strategy, officials have argued they should be free to deploy commercial AI technology regardless of a company’s internal usage policies, as long as such deployments comply with U.S. law.

That stance reflects a broader view within the Pentagon that operational decisions should rest with elected leaders and military commanders, not private technology firms. Officials worry that allowing companies to impose guardrails could limit flexibility in fast-moving or high-risk scenarios.

A spokesperson for the Defense Department, which the Trump administration renamed the Department of War, did not immediately respond to requests for comment.

Anthropic said its AI is “extensively used for national security missions by the U.S. government and we are in productive discussions with the Department of War about ways to continue that work.”

Despite the friction, Pentagon officials would likely still need Anthropic’s cooperation if they wish to proceed. The company’s models are trained to avoid taking steps that could lead to harm, and Anthropic staff would be responsible for modifying or fine-tuning the systems for specific defense applications, several sources said.

Implications for military AI and industry influence

The spat could threaten Anthropic’s Pentagon business at a particularly sensitive moment for the company. The San Francisco-based startup is preparing for an eventual public offering and has invested heavily in courting U.S. national security customers. It has also sought a prominent role in shaping government AI policy, positioning itself as both a supplier and a thought leader on responsible AI deployment.

Anthropic is one of a handful of major AI developers awarded Pentagon contracts last year, alongside Alphabet’s Google, Elon Musk’s xAI, and OpenAI. Those deals were seen as a signal that the Defense Department was moving more aggressively to integrate commercial AI into defense planning, logistics, intelligence analysis, and potentially weapons systems.

But the current standoff underscores a fundamental question facing the U.S. government: whether reliance on commercial AI providers will also mean accepting limits set by those companies, or whether national security imperatives will override corporate policies.

For Silicon Valley, the outcome could shape future relationships with Washington. If Anthropic succeeds in enforcing its safeguards, it may embolden other companies to demand similar restrictions. If it fails, firms may have to decide whether to walk away from lucrative government contracts or accept broader uses of their technology.

Political backdrop

In an essay published this week on his personal blog, Anthropic CEO Dario Amodei argued that AI should support national defense “in all ways except those which would make us more like our autocratic adversaries.” The comment reflects a view among some AI researchers that democratic governments should impose stricter ethical limits on emerging technologies, even if rivals do not.

Amodei was also among the Anthropic co-founders who criticized the fatal shootings of U.S. citizens protesting immigration enforcement actions in Minneapolis, which he described as a “horror” in a post on X. Those deaths have intensified concern among some in Silicon Valley about how government agencies might use advanced technology, including AI, in ways that could contribute to violence or civil liberties abuses.

As AI systems grow more capable and more deeply embedded in national security operations, the Anthropic-Pentagon dispute illustrates the broader struggle over who gets to define the rules.

The post Pentagon and Anthropic at Odds Over Military AI Applications appeared first on eWEEK.