The AI War Between Anthropic and Pentagon Exposes a Deep US Surveillance Gap
A heated conflict between government agencies over AI use reveals legal ambiguities surrounding mass surveillance of Americans.
The ongoing public feud between the Department of Defense (DoD) and AI company Anthropic has brought to light an unsettling question: Does U.S. law actually permit the government to conduct mass surveillance on its citizens? This issue is particularly pertinent given that it emerged from a high-stakes negotiation over the use of artificial intelligence.
From Snowden’s Revelations to Today
Much has changed since Edward Snowden exposed the National Security Agency's (NSA) bulk metadata collection program in 2013. Yet, more than a decade later, the legal landscape remains murky when it comes to domestic surveillance by government agencies.
The Anthropic Standoff
At the heart of this controversy is the Pentagon's desire to use Anthropic's Claude AI model to analyze bulk commercial data on Americans. Anthropic, however, has made clear that its technology may not be used for mass domestic surveillance or in autonomous weapons systems.
The negotiations between these two entities broke down after a week, leading the DoD to label Anthropic as a "supply chain risk." This designation is typically reserved for foreign companies deemed a threat to national security. The move was seen by many as an attempt to pressure Anthropic into compliance or force it out of business.
OpenAI’s Deal and Public Backlash
In contrast, OpenAI—the company behind ChatGPT—sealed a deal with the Pentagon permitting use of its AI for "all lawful purposes." Critics argue this language is vague enough to be stretched to justify domestic surveillance, and the ambiguity has prompted many users to uninstall the company's applications in protest.
Over the weekend, ChatGPT usage dropped significantly as users voiced concerns about privacy and government oversight. Protesters chalked messages around OpenAI's headquarters: "What are you doing with our data?" The backlash highlights growing public unease about AI's role in surveillance.
The Broader Implications
This conflict between Anthropic and DoD is not just a corporate battle; it reflects deeper issues within U.S. tech policy. The legal framework for regulating AI, particularly regarding privacy and national security, remains underdeveloped. As more companies enter the market with advanced technologies, policymakers must grapple with how to balance innovation with public trust.
The outcome of this dispute could set a precedent that shapes future interactions between government agencies and tech firms. It underscores the need for clearer legal guidelines on the use of AI in surveillance—guidelines that protect citizens' privacy rights while ensuring national security interests are met responsibly.