Anthropic clash with Pentagon fuels government surveillance fears

By Miranda Nazzaro
Anthropic’s clash with the Pentagon is reigniting fears of government surveillance, as experts warn the capabilities of artificial intelligence, paired with the Trump administration’s sweeping data collections, pose new threats to individual privacy.
Just over a year after President Trump welcomed AI firms into government, the White House’s unprecedented reach for personal data has left some technology leaders at odds with the administration.
Anthropic and the Department of Defense (DOD) butted heads over the extent to which the company’s AI tools could be used to conduct surveillance and compile information about U.S. citizens and residents — a red line for the company’s CEO, Dario Amodei. The dispute cost Anthropic its government contract and spurred a legal battle over the company’s designation as a national security threat.
“Frontier AI fundamentally changes the surveillance calculus,” David Bader, a professor at the New Jersey Institute of Technology, told The Hill. “Analyzing billions of data points to build profiles on millions of Americans used to be computationally impractical, but now it’s trivial with AI, and the law hasn’t caught up to that reality.”
From the start of negotiations, Amodei said AI-driven mass surveillance is “incompatible” with democratic values, warning it presents “serious, novel risks to our fundamental liberties.”
Anthropic, which has worked with the Pentagon since 2024 as a subcontractor of the data analytics firm Palantir, pressed for specific restrictions on mass domestic surveillance, suggesting some uses are “outside the bounds” of what current technology can handle “safely and reliably.”
The DOD insisted on an “all lawful purposes” standard, and Pentagon leaders alleged Anthropic sought to “personally control” the U.S. military and jeopardize national security. After the two sides failed to reach an agreement, President Trump ordered federal agencies to stop using Anthropic products, and Defense Secretary Pete Hegseth issued a rare supply chain risk designation for the company.
Oliver Stephenson, the associate director for AI and emerging technology policy at the think tank Federation of American Scientists, explained that data collected by the government can be fed into AI tools to produce “incredibly detailed inferences about people.” He pointed to recent research showing how large language models can be used to identify the authors of purportedly anonymous online posts, “matching what would take hours for a dedicated human investigator.”
“It’s not just data that’s showing anonymous patterns of life,” Stephenson added. “We have transitioned from a world in which the limitation used to be on collection, and is now on analysis capabilities.”
Today, the government can buy publicly available data on individuals in bulk from commercial data brokers. The information spans location histories, purchase histories, demographic details and more. Civil liberties groups have long argued the practice violates the Fourth Amendment, while the government has maintained it does not break any laws.
Much of the publicly reported surveillance has involved agencies like the National Security Agency and the Department of Homeland Security, not necessarily the broader Pentagon.
Emil Michael, the undersecretary of Defense for research and engineering, pushed back on concerns about domestic surveillance, telling the tech publication Pirate Wires that the Pentagon is “not the FBI” or “DHS.”
“The notion that we would get painted as wanting to do that is crazy,” Michael reportedly said. “We don’t want censorship, we don’t want people’s privacy intruded.”
Michael’s comments allude to the worst-case scenario fears the public often holds of so-called government “spying.”
OpenAI, the maker of the popular ChatGPT chatbot, reached a deal with the Pentagon just hours after the agency’s talks with Anthropic fell apart earlier this month.
OpenAI faced immediate backlash, and days later, CEO Sam Altman announced an amended version of the deal with new assurances, including a declaration that “AI systems shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” This included a stipulation that OpenAI’s services will not be used by the department’s intelligence agencies, including the NSA.
Altman was flooded with requests on social media to share the exact terms of the contract as a way to “regain” the trust users saw as broken by OpenAI’s swift deal.
“What we’re seeing right now is the culmination of decades of cultivated mistrust on the part of both Big Tech and the government,” Matthew Guariglia, a senior policy analyst at the Electronic Frontier Foundation, told The Hill, adding, “In a more perfect world, you have entities that you trust with your data.”
The wavering trust in the government comes as the White House faces separate scrutiny over how agencies share personal data about U.S. residents and deploy new technologies in the administration’s immigration enforcement efforts.
Much of this data swapping has occurred with Immigration and Customs Enforcement, which is receiving taxpayer data from the Internal Revenue Service, Medicaid data from the Department of Health and Human Services and passenger information from the Transportation Security Administration. Customs and Border Protection, meanwhile, purchased data from the online advertising industry, 404 Media reported earlier this month.
Guariglia, whose research focuses on surveillance policing, noted the full breadth of how technology could be used for surveillance is not yet known.
“People need to look at the laying of infrastructure … use cases tend to come later, but what we need to see is what the government is actually physically capable of right now,” he said. “And they are more than capable of scraping the entire internet, siphoning data from many different branches of government [and] buying commercially available data from data brokers.”
Owen Daniels, associate director of analysis and Andrew W. Marshall fellow at Georgetown University’s Center for Security and Emerging Technology, argued it “seems unlikely” the Pentagon is looking to use Anthropic for mass surveillance given restrictions in U.S. law.
Still, Daniels said the situation speaks to a larger concern over the ability of AI tools to aggregate data, “whether for intelligence collection abroad or to build profiles of consumers at home.”
The saga between Anthropic and the Pentagon is renewing calls for Congress to do more about government surveillance and AI regulation as a whole.
“This isn’t just about whether you trust today’s military leadership, it’s about building durable technical and legal frameworks for AI,” Bader said. “Capabilities will only grow more powerful.”
While the Trump administration has cast the issue as political, Bader said there is a legitimate policy debate to be had over how AI should be deployed and where responsibility sits for ensuring legal and responsible use.
Amodei similarly said he believes it is “Congress’s job” to address how AI is changing the possibilities of domestic mass surveillance.
“The judicial interpretation of the Fourth Amendment has not caught up. Or the laws passed by Congress have not caught up,” Amodei told CBS News earlier this month. “So in the long run, we think Congress should catch up with where the technology is going.”
Most major attempts to regulate or add guardrails to AI have made little progress in Congress, let alone on surveillance-specific bills.
Guariglia pointed out the House overwhelmingly passed the Fourth Amendment Is Not For Sale Act in 2024, but it did not move in the Senate. The bipartisan bill would limit how the government can purchase data from third parties and require law enforcement and other government entities to get a warrant before buying information from third-party data brokers.
“The Senate could take that up tomorrow and close one of the biggest loopholes that’s allowing warrantless surveillance in the United States,” Guariglia said.
Anthropic filed two lawsuits in federal courts Monday over the supply chain risk designation, warning of the “enormous” consequences of the case. The AI firm alleged the federal government “retaliated” against it for its protected viewpoint.
https://thehill.com/policy/technology/5775732-anthropic-pentagon-ai-surveillance-clash/