WASHINGTON (AP) — The Pentagon’s top spokesman has reiterated that the military wants to use Anthropic’s artificial intelligence technology in legal ways and will not let the company dictate any limits ahead of a Friday deadline to agree to its demands.
Sean Parnell said Thursday on social media that the Pentagon “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.”
Anthropic’s policies prevent its models from being used for those purposes. It is the last of its major AI peers not to supply its technology to a new U.S. military internal network.
Parnell said the Pentagon wants to “use Anthropic’s model for all lawful purposes” but didn’t offer details on what that entailed. He said opening up use of the technology would keep the company from “jeopardizing critical military operations.”
“We will not let ANY company dictate the terms regarding how we make operational decisions,” he said.
During a meeting on Tuesday between Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.
Parnell mentioned only two of those consequences in the Thursday post on X and said Anthropic has “until 5:01 PM ET on Friday to decide.”
“Otherwise, we will terminate our partnership with Anthropic and deem them a supply chain risk,” he wrote.
Anthropic didn’t immediately respond to a request for comment Thursday. It said in a statement after Tuesday’s meeting that it “continued good-faith conversations about our usage policy to ensure Anthropic can continue to support the government’s national security mission in line with what our models can reliably and responsibly do.”