Trump Orders Federal Agencies to Halt Use of Anthropic AI Technology

Mar 1, 2026, 2:17 AM

President Donald Trump has ordered all US federal agencies to immediately cease using technology developed by the artificial intelligence company Anthropic. This directive marks a significant escalation in a public dispute between the administration and the company concerning the safety and ethical implications of AI technology in military contexts.
The conflict reached a boiling point after the Pentagon demanded that Anthropic loosen its ethical guidelines regarding the use of its AI systems by a Friday deadline. Anthropic's CEO, Dario Amodei, stated that the company "cannot in good conscience accede" to demands that would allow for potential mass surveillance of American citizens or the development of fully autonomous weapons.
In a post on Truth Social, Trump criticized Anthropic, labeling its executives as "left-wing nut jobs" and asserting that the company was attempting to dictate military operations. He emphasized that the United States would not allow a tech firm to impose its terms on national security, further stating, "We don't need it, we don't want it, and will not do business with them again!"
Defense Secretary Pete Hegseth reiterated the administration's stance by designating Anthropic as a "supply chain risk," a classification typically reserved for foreign adversaries. This designation threatens to sever Anthropic's critical partnerships with other businesses, potentially crippling its rapid rise as one of the world's most valuable AI startups.
Anthropic responded to the government's actions by stating its intention to challenge what it described as an unprecedented and legally unsound move. The company expressed dismay over the escalating situation, claiming that the Pentagon's demands were incompatible with American principles of ethical AI use.
The Pentagon's push to obtain unrestricted access to Anthropic's AI systems has drawn criticism from both within and outside the government. Critics argue that the government's actions could be politically motivated rather than based on sound national security analysis. Virginia Senator Mark Warner, a Democrat, expressed concern that the inflammatory rhetoric surrounding Anthropic could undermine careful national security decisions.
Interestingly, the dispute has sparked a wave of support for Anthropic from its top rivals in Silicon Valley. Notably, OpenAI's CEO Sam Altman, who has a contentious history with Anthropic, publicly sided with the company. He stated that OpenAI shares the same ethical concerns regarding mass surveillance and autonomous weapons.
The tension surrounding AI technology in military applications reflects broader concerns about the role of AI in national security. As AI systems grow more sophisticated, the potential for misuse in high-stakes situations raises critical ethical questions. Experts have warned that such public disputes could hinder innovation in US AI companies, providing opportunities for adversaries like China and Russia to gain a competitive edge.
As the situation evolves, the Pentagon has indicated it will continue to use Anthropic's AI technology for a transition period of six months. However, Trump's administration has warned that any failure to cooperate during this period could result in "major civil and criminal consequences" for the company.
The public nature of this conflict highlights the challenge of integrating cutting-edge technology into military operations while upholding ethical standards. With the future of Anthropic's contracts hanging in the balance, the outcome of this dispute may have lasting implications for the relationship between Silicon Valley and the US government. As both sides prepare for potential legal battles and continued negotiations, the standoff also raises broader questions about how AI technologies should be governed in sensitive areas such as defense.
O'Brien reported from Providence, Rhode Island.