Appeals Court Rules Against Anthropic in AI Dispute with Pentagon

Apr 10, 2026, 2:24 AM

A federal appeals court has ruled against artificial intelligence laboratory Anthropic, refusing to block the Pentagon from blacklisting the company in a case that highlights ongoing tensions between the AI sector and the Trump administration's policies.
The US Court of Appeals in Washington, DC, declined Anthropic's request for an order that would shield the San Francisco-based firm from consequences tied to the deployment of its Claude chatbot in potentially autonomous weapons and surveillance applications.
This ruling represents a setback for Anthropic, which previously achieved a favorable outcome in a related case in San Francisco federal court. In that ruling, US District Judge Rita Lin determined that the Trump administration had overstepped its authority by labeling the company as a national security risk and restricting its participation in defense contracts.
Anthropic initiated two lawsuits, one in San Francisco and the other in Washington, arguing that the Trump administration was conducting an "unlawful campaign of retaliation" against the firm as it sought to impose limits on the use of its AI technology. The administration has criticized Anthropic as a liberal-leaning entity attempting to influence US military policy.
The San Francisco ruling prompted the Trump administration to remove the negative labels from Anthropic, allowing government employees and contractors to continue using the Claude chatbot and the company's other AI technologies. The appeals court in Washington, however, found insufficient grounds to block the administration's actions, even while acknowledging that Anthropic would likely face "some degree of irreparable harm" if designated a supply chain risk. The court noted that the full extent of Anthropic's financial harm had not been clearly established.
Further proceedings are anticipated, with a hearing scheduled for May 19 to present additional evidence. In response to the ruling, Anthropic said it was grateful the court recognized the need for a swift resolution and remained confident the judiciary will ultimately deem the supply chain designations unlawful.
The conflicting court decisions between the San Francisco and Washington cases have raised alarms in the tech community. Matt Schruers, CEO of the Computer & Communications Industry Association, highlighted the uncertainty created by the Pentagon's actions and the DC Circuit's ruling at a crucial moment for US companies competing globally in the AI domain. Schruers emphasized that this situation could hinder the US tech sector's ability to maintain its leadership position in artificial intelligence innovation.
As the legal battles continue, the implications for Anthropic and the broader AI industry remain unclear, particularly as the US navigates competing pressures of national security and technological advancement.

Related articles

Congress Targets Global Chip Equipment in AI Strategy

The US Congress is advancing legislation restricting the export of semiconductor manufacturing equipment to enhance domestic competitiveness in artificial intelligence while aiming to curb China's technological advancements. Bipartisan bills, including the STRIDE Act and the MATCH Act, are set to enforce stricter export controls and align international partners with US policies.

License Plate Readers: A Powerful Tool Against Crime

License plate reader (LPR) technology is emerging as a crucial tool for law enforcement to combat rising crime rates. By providing real-time alerts and data on vehicle movements, LPRs enhance public safety and officer efficiency while raising important legal and ethical considerations regarding privacy.

Kelly and Fitzpatrick Challenge Trump on AFL-CIO's AI Stance

Congressmen Mike Kelly and Brian Fitzpatrick are urging former President Donald Trump to reconsider his position on artificial intelligence as it relates to the AFL-CIO. Their push emphasizes the need for a balanced approach to AI regulation that protects workers while fostering innovation.

FSU Shooting Lawsuit Targets ChatGPT Amid Big Tech Accountability Push

Following the tragic mass shooting at Florida State University in April 2025, the family of victim Robert Morales plans to file a lawsuit against ChatGPT's parent company, OpenAI. This legal action has sparked renewed calls from Florida Congressman Jimmy Patronis for greater accountability of Big Tech companies, specifically challenging the protections provided under federal law.

Landmark Verdicts Signal New Accountability for Big Tech

Recent jury verdicts against Meta and Google may herald a new era of accountability for big tech companies. These decisions focus on the harm caused by social media platforms, particularly to young users, and could pave the way for more lawsuits and regulatory changes.