
The Trump administration on Friday directed all federal agencies to discontinue use of Anthropic’s artificial intelligence systems and announced additional punitive measures, escalating a highly visible dispute over AI safety and national security standards.
Defense Secretary Pete Hegseth stated that Anthropic would be classified as a “supply chain risk,” a designation that could effectively block U.S. defense contractors from partnering with the firm. His announcement, made via social media, followed a Pentagon ultimatum requiring Anthropic to permit unrestricted military deployment of its AI tools or face repercussions. A day earlier, CEO Dario Amodei had said the company could not “in good conscience” agree to the Defense Department’s terms.
President Donald Trump sharply criticized the company, accusing it of mishandling negotiations with the Pentagon. Posting on Truth Social, Trump ordered most federal agencies to halt use of Anthropic’s AI immediately, while granting the Pentagon a six-month window to remove the technology from systems where it is already integrated.
At the heart of the disagreement was the role of artificial intelligence in national security operations. Anthropic, developer of the chatbot Claude, said it had sought specific guarantees that its technology would not be employed for mass domestic surveillance or fully autonomous weapons. After private negotiations became public, however, the company argued that revised contract language, though presented as a compromise, contained provisions that could override those safeguards.
While Anthropic may be financially capable of absorbing the loss of the contract, Hegseth’s warning carried broader implications. Being labeled a supply chain risk—a classification more commonly associated with foreign adversaries—could jeopardize key business relationships at a critical moment in the company’s rapid ascent from a small San Francisco research lab to one of the world’s most highly valued startups.
Trump further warned that Anthropic might face significant civil or criminal liability if it failed to cooperate during the phase-out process. The company did not immediately respond to requests for comment. Prior to the president’s announcement, senior Trump officials from both the Pentagon and the State Department had publicly criticized Anthropic’s refusal to meet the administration’s demands.
Sen. Mark Warner, the leading Democrat on the Senate Intelligence Committee, said the penalties—combined with hostile rhetoric—raised questions about whether national security decisions were being guided by objective analysis or political motives.
The dispute reverberated through Silicon Valley. Employees at rival firms such as OpenAI and Google expressed support for Amodei’s position through public letters and online forums.
The administration’s action could advantage Elon Musk’s chatbot Grok, which the Pentagon intends to connect to classified military networks. The move may also signal to other AI contractors—including Google and OpenAI—that similar resistance could bring consequences.
Musk backed the administration, writing on his platform X that “Anthropic hates Western Civilization.” Meanwhile, Sam Altman, CEO of OpenAI and a former colleague of Amodei, questioned what he described as the Pentagon’s “threatening” stance in an interview with CNBC, suggesting that much of the AI industry shares similar boundaries around military applications.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they genuinely prioritize safety,” Altman said.
Retired Air Force Gen. Jack Shanahan also weighed in, warning that singling out Anthropic might generate attention but would ultimately harm broader national interests. He noted that Claude is already widely used across government agencies, including in classified environments, and described Anthropic’s proposed safeguards as reasonable. Shanahan added that large language models powering chatbots like Claude are not yet sufficiently mature for critical national security roles, especially in fully autonomous weapons systems.
“They’re not playing games here,” he wrote on LinkedIn.