Defense Secretary Pete Hegseth deemed artificial intelligence firm Anthropic a “supply chain risk to national security” on Friday, following days of increasingly heated public conflict over the company’s effort to place guardrails on the Pentagon’s use of its technology.
Hegseth declared on X that effective immediately, “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” The decision could have a wide-ranging impact, given the sheer number of companies that contract with the Pentagon.
“America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final,” Hegseth wrote.
President Trump announced earlier Friday that all federal agencies must “immediately” stop using Anthropic, though the Defense Department and certain other agencies can continue using its AI technology for up to six months while transitioning to other services.
CBS News has reached out to Anthropic for comment.
The decision to cut off Anthropic came after a dispute with the Pentagon that highlighted sweeping disagreements about the role of AI in national security and the potential risks that the powerful technology could pose.
The company — which is the only AI firm whose model is deployed on the Pentagon’s classified networks — has sought guardrails that prevent its technology from being used to conduct mass surveillance of Americans or carry out military operations without human approval. But the Pentagon insisted any deal should allow it to use Anthropic’s Claude model for “all lawful purposes.”
The Pentagon had given Anthropic a deadline of Friday at 5:01 p.m. to either reach an agreement or lose out on its lucrative contracts with the military.
The military’s position is that it’s already illegal for the Pentagon to conduct mass surveillance of Americans, and that internal policies restrict the military from using fully autonomous weapons. As talks between the two sides broke down this week, Pentagon officials publicly accused the company of seeking to impose its own views on the military.
Hegseth called Anthropic “sanctimonious” and arrogant on Friday, and accused it of trying to “strong-arm the United States military into submission.”
“Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable,” Hegseth alleged.
But Anthropic CEO Dario Amodei has argued that guardrails are necessary because Claude is not reliable enough to power fully autonomous weapons, and because a powerful AI model could raise serious privacy concerns. He says the company understands that military decisions are made by the Pentagon and has never tried to limit the use of its technology “in an ad hoc manner.”
“However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei said in a statement Thursday. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”
Amodei has been outspoken for years about the potential risks posed by unchecked AI technology, and has backed calls for safety and transparency regulations.
On Thursday, the eve of the military’s deadline to reach a deal, the Pentagon’s chief technology officer, Emil Michael, told CBS News that the Pentagon had made concessions, offering written acknowledgements of the federal laws and internal military policies that restrict mass surveillance and autonomous weapons.
“At some level, you have to trust your military to do the right thing,” said Michael, who also noted, “we’ll never say that we’re not going to be able to defend ourselves in writing to a company.”
Anthropic called that offer inadequate. A company spokesperson said the new language was “paired with legalese that would allow those safeguards to be disregarded at will.”
