OpenAI Revises Military Deal Amid User Backlash

Mar 4, 2026, 2:54 AM


OpenAI's CEO Sam Altman has publicly acknowledged that the company rushed into a deal with the US Department of Defense (DOD), which has sparked significant backlash from its user base. In a post on X, Altman described the agreement as appearing "opportunistic and sloppy," prompting the company to amend its contract with the military to include clearer guidelines on the use of its AI technology.
The deal was announced shortly after President Donald Trump directed federal agencies to stop using AI tools from competitor Anthropic, which had refused to comply with the DOD's demands to use its technology for domestic surveillance and autonomous weapons. OpenAI's swift acquisition of the DOD contract amid these tensions raised alarms among its civilian user community, who feared potential implications for privacy and security.
Altman's internal memo highlighted that the new language added to the agreement explicitly states that OpenAI's AI systems "shall not be intentionally used for domestic surveillance of US persons and nationals." This effort aims to address concerns that the technology could be misused for mass surveillance or military operations without adequate oversight. However, critics argue that the reliance on legal definitions to limit such uses creates significant loopholes that could be exploited by the government should laws change in the future.
Despite OpenAI's claims that its revised deal includes "more guardrails than any previous agreement for classified AI deployments," skepticism remains regarding the effectiveness of these measures. The revisions still permit surveillance under certain legal frameworks, which some experts and users interpret as a lack of genuine commitment to preventing misuse of the technology. Political researcher Tyson Brody remarked on social media that the terms could lead to "incidental collection" of data, undermining the intended safeguards.
Altman has reiterated that OpenAI's approach will primarily adhere to legal standards rather than ethical considerations, a stance that has drawn criticism from various quarters. He stated, "We want to have a voice, and a seat at the table where we can share our expertise, and to fight for principles of liberty," but he maintained that the government should ultimately make key societal decisions regarding AI use.
The backlash from OpenAI's user base has been tangible, with reports indicating a 295 percent increase in uninstalls of the ChatGPT app following the announcement of the deal. This surge in uninstalls reflects widespread dissatisfaction and distrust among users regarding OpenAI's partnership with the DOD; many have turned to Anthropic's Claude AI as an alternative, which has climbed to the top of app store rankings since the controversy erupted.
While OpenAI's revised agreement aims to clarify its principles and prevent the misuse of its technology, the persistent concerns about its role in military operations and potential surveillance highlight the complexities and challenges facing AI companies in balancing innovation with ethical responsibilities. The ongoing discourse emphasizes the urgent need for transparency and accountability in the development and deployment of advanced AI systems in sensitive domains like national defense.
As the situation continues to evolve, it remains crucial for both technology providers and policymakers to engage in open dialogue to ensure that the deployment of AI aligns with societal values and legal standards, safeguarding privacy and civil liberties in the process. The future of AI in military applications will undoubtedly be shaped by these discussions and the evolving relationship between private companies and government entities.

Related articles

U.S. Government's iPhone Hacking Toolkit Leaked to Foreign Criminals

A sophisticated iPhone hacking toolkit known as Coruna, believed to have originated from US government efforts, is now in the possession of foreign espionage actors and cybercriminals. This alarming leak raises significant concerns over cybersecurity and the proliferation of state-level surveillance tools.

AI Standoff: A Battle for Control of Military Technology

The recent conflict between the Pentagon and AI firm Anthropic raises questions about the control of military technology in the US. Following Anthropic's blacklisting and OpenAI's contract win, the outcome may redefine the balance of power between the government and AI developers, significantly impacting future military operations.

Colorado Lawmakers Clash Over Surveillance Technology Regulations

In Colorado, bipartisan efforts are underway to regulate surveillance technology amidst growing concerns over privacy and data security. Lawmakers are debating bills that would limit law enforcement's access to personal data and restrict technologies like facial recognition and license plate readers. These discussions reflect broader tensions between public safety and individual rights.

Trump Orders Halt on Anthropic AI Use After Pentagon Tensions

In a significant escalation of tensions, President Trump has ordered all US federal agencies to cease the use of Anthropic's AI products following the company's refusal to allow unrestricted military applications. The Pentagon has labeled Anthropic a national security risk, complicating its future dealings with government contractors.

OpenAI Secures Pentagon Deal Amid Controversy Over AI Use

OpenAI has reached a deal with the Pentagon to utilize its AI models within classified military networks. This agreement comes in the wake of President Trump's order for federal agencies to cease using rival Anthropic's technology due to concerns over its ethical use in military operations.