NEWS

Anthropic denounces Pentagon pressure to facilitate mass surveillance


After days of rumors, the company's CEO confirms that the Trump administration is threatening to designate it a "risk," a label reserved for US adversaries and never before applied to a US company

Pages from the Anthropic website and the company's logos. AP

A very serious accusation, the most serious to date at the intersection of artificial intelligence, defense, and espionage. After days of rumors and partial leaks, the CEO of Anthropic, one of the leading AI firms and the one with the most contracts with, and access to, the US Department of Defense, confirmed on Thursday in a public statement what was an open secret: the Trump administration, through Defense Secretary Pete Hegseth, is pressuring and threatening the company to remove all existing safeguards that prevent the use of its technology for mass surveillance of American citizens and for the creation of autonomous armed drones.

The Pentagon, according to the company's complaint, has warned that there will be enormous consequences for the creators of the chatbot Claude, and that laws will be invoked to achieve its goals by force, but Anthropic insists it will not comply. "These threats do not change our stance: we cannot, in good conscience, comply with their request," the company affirms.

"It is a shame that Dario Amodei is a liar and has a god complex. He only wants to personally control the US Armed Forces and does not mind jeopardizing our nation's security. The Department of Defense will ALWAYS adhere to the law but will not bow to the whims of any for-profit tech company," Defense Undersecretary Emil Michael reacted immediately.

The tension between Anthropic and the Pentagon has escalated this week, including a summons of the CEO and his team to Pentagon headquarters. During the meeting, Hegseth gave him a 48-hour deadline, expiring this Friday, to capitulate and grant full access to the company's technology. Dario Amodei has repeatedly expressed ethical reservations about the government's indiscriminate use of AI, not only for surveillance but also for armed, fully autonomous drones. According to his account, the Pentagon told him that unless the firm granted full access, it would not only be expelled from Pentagon systems but also designated a "supply chain risk," an unprecedented label for a US firm, or face invocation of the Defense Production Act to essentially gain access by force.

"I deeply believe in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries. That is why Anthropic has proactively worked to deploy our models in the Department of Defense and the intelligence community. We were the first cutting-edge AI company to deploy our models on the US government's classified networks, the first to deploy them in the National Laboratories, and the first to provide customized models for national security clients. Claude is widely used in the Department of Defense and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more. Anthropic has also acted to defend US leadership in AI, even when it goes against the company's short-term interests. We decided to forgo hundreds of millions of dollars in revenue to prevent the use of Claude by companies linked to the Chinese Communist Party, dismantled CCP-sponsored cyberattacks attempting to abuse Claude, and advocated for strict export controls on chips to ensure a democratic advantage," the executive explains in a statement released this afternoon, Washington time.

Anthropic reiterates that it is not its role to make military decisions, and that it has never objected to specific military operations nor tried to limit the use of its technology on an ad hoc basis, but it has reached a limit: there is a line it will not cross. The public complaint aims to pressure Congress and public opinion into making the Pentagon relent, but that will be challenging.

"In a limited number of cases, we believe that AI can undermine, rather than defend, democratic values. Some uses are also simply beyond the bounds of what current technology can safely and reliably do. Two of these use cases have never been included in our contracts with the Department of Defense, and we believe they should not be included," says Amodei, referring specifically to surveillance and drones.

"We support the use of AI for lawful foreign intelligence and counterintelligence missions. However, using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance poses serious and novel risks to our fundamental freedoms. If such surveillance is currently legal, it is solely because the law has not yet adapted to the increasing capabilities of AI. For example, under current legislation, the Government can acquire detailed records of Americans' movements, web browsing, and associations from public sources without obtaining a court order, a practice that, as acknowledged by the Intelligence Community, raises privacy concerns and has generated bipartisan opposition in Congress. Powerful AI allows these scattered and individually innocuous data points to be assembled into a complete picture of anyone's life, automatically and at scale," says the statement, in the clearest denunciation yet by a key player with extensive access to military information and resources.

"Partially autonomous weapons, like those currently used in Ukraine, are vital for defending democracy. Even fully autonomous weapons (those that remove humans from the process entirely and automate target selection and attack) can be crucial for our national defense. However, cutting-edge AI systems are simply not reliable enough today to power fully autonomous weapons. We will not knowingly provide a product that puts American warfighters and civilians at risk. We have offered to collaborate directly with the Department of Defense on R&D to improve the reliability of these systems, but they have not accepted the offer. Additionally, without proper oversight, fully autonomous weapons cannot be trusted to exercise the critical judgment that our professional and highly trained troops demonstrate daily. They must be deployed with the necessary safeguards, which currently do not exist," says the statement.

Both complaints are very serious, pointing to risks to civil rights and to individual safety alike. In its statement, which will cause a huge stir because it moves beyond the leaks of recent days to a full confirmation, Anthropic asserts that "the Department of Defense has stated that it will only hire AI companies that accept 'any legal use' and remove the safeguards in the mentioned cases. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us as a 'supply chain risk' (a label reserved for US adversaries, never before applied to a US company) and invoke the Defense Production Act to force the removal of the safeguards. These last two threats are inherently contradictory: one labels us a security risk; the other, essential to national security. But in any case, these threats do not change our stance: we cannot, in good conscience, comply with their request," the company concludes.

"The Department of Defense has no interest in using AI for mass surveillance of Americans (which is illegal), nor do we want to use AI to develop autonomous weapons that operate without human intervention. This narrative is false and is being spread by leftists in the media. This is what we ask: allow the Pentagon to use Anthropic's model for all legal purposes. This is a simple and sensible request that will prevent Anthropic from endangering critical military operations and potentially putting our warfighters at risk. We will not allow any company to dictate the terms of our operational decisions. You have until 5:01 p.m. (Eastern time) on Friday to decide. Otherwise, we will terminate our collaboration with Anthropic and consider it a supply chain risk for the DoD," the Pentagon spokesperson had warned on Thursday morning.

Although the bridges seem burned, the company tries at the end of its statement to extend an olive branch. "It is the Department's prerogative to select the contractors that best fit its vision. However, given the substantial value that Anthropic's technology brings to our armed forces, we hope it reconsiders. Our strong preference is to continue serving the Department and our warfighters, with the two requested safeguards in place. If the Department decides to disengage from Anthropic, we will work to facilitate a smooth transition to another provider, avoiding any disruption to military planning, operations, or other critical ongoing missions. Our models will remain available under the broad conditions we have proposed for as long as necessary," the text concludes.