First came the destruction of all Venezuelan radars in the operation to capture Nicolás Maduro. Shortly after, thousands of coordinated attacks on Iran. Before that, the invasion of Gaza had marked the beginning of a technological shift in warfare. If the war in Ukraine will go down in the history of military tactics for the discovery of cheap drones as weapons of war, the war with Iran threatens to do so as the first major deployment of AI-orchestrated attacks.
A new era that arrives amid intense debate, ranging from law and ethics to the future economics of conflict. The main catalyst has been Dario Amodei, founder of the AI giant Anthropic, who is at odds with the Trump administration for refusing to remove the safeguards his company's language models impose on certain uses: specifically, the autonomous use of weapons without a human in the loop, and mass surveillance.
"Anthropic understands that the Department of Defense, and not private companies, make military decisions. We have never objected to specific operations (...) However, in some cases, we believe that AI can undermine, rather than defend, democratic values," the company's CEO stated in a press release.
Bjorn Beam, senior geopolitics analyst at Arcano Research and former CIA officer, emphasizes the profound paradigm shift we are experiencing with a technology that can capture information from thousands of sensors and analyze it in a matter of seconds, a process that used to take hours. Combined with the impact that the widespread use of autonomous vehicles is already having, this boosts armies' decision-making capacity.
"In Ukraine, drones were used for the first time to search with AI for possible targets to attack," explains the expert, who points to the other two major uses of AI in new conflicts: data collection and analysis, and its application to intelligence for cyberattacks. "Decisions that used to take days are now made in minutes," highlights the former U.S. intelligence officer.
The fact that these situations do not involve a human decision-maker is a central issue for many ethicists, and it cannot be detached from reality: autonomous weapons are already on the battlefield. "I believe that human intervention must be present. I do not envision a battlefield devoid of humans where artificial intelligence decides who to attack; that has significant ethical implications," emphasizes Salvador Sánchez Tapia, retired brigadier general and professor at the University of Navarra.
The CEO of Anthropic acknowledges that autonomous weapons exist and will continue to exist, but he warned in his statement that the technology was not yet ready to make key decisions with sufficient certainty, such as forming part of an anti-aircraft defense system and diverting a missile.
"The CEO of Anthropic himself states that different guardrails are needed for defense or for a normal model, but there have to be some. Now the decisions of unit leaders and the government are made based on data from these companies, and if the companies change, so do these decisions. That's why I also understand why the government wants some control," Beam remarks on a conflict that, for now, has put the AI company on a blacklist for government contracts.
The European Union is also not immune to this situation and is already working on its own scenarios that necessarily involve the use of systems similar to those in the United States. "The ethical limits of a model for normal use are not the same as in times of war," explains Idoia Salazar, president of the Observatory of the Social and Ethical Impact of Artificial Intelligence, OdiseIA.
The jurist explains that exceptional situations like the pandemic have already empowered governments to take exceptional measures, and conflicts are another situation where it is "logical" for the use of such powerful tools as language models to change and be leveraged.
However, she does believe that this framework must have its own ethics and respect for rights, and in fact, she points out that the Commission is already working on how to incorporate an ethical layer into the AI algorithms that assist in decision-making in their armies and other elements such as autonomous vehicles, crucial in the evolution of new battlefields.
This new paradigm has clear winners in the technology companies, both established ones like Microsoft or Google, which also provide numerous capabilities to the State, as well as others in adjacent sectors, from the world of satellites to startups that have become giants thanks in large part to military contracts, such as Anduril and Palantir. The latter has made a name for itself amid eccentric statements from its CEO and a veil of secrecy around its data analysis technologies, now valued at $370 billion in the stock market, 244 times its profits. Precisely, the use of Palantir's technology combined with large language models has been the basis for decision-making on U.S. targets in their recent operations.
The structure that groups these cutting-edge technologies together is Project Maven, a kind of command post that fuses information collected from the battlefield by sensors and drone cameras with data obtained through cyberattacks, social networks, and satellite imagery, so that the military has a better understanding of the situation.
Anduril, on the other hand, started with the development of autonomous drones but has also shifted towards other decision-making technologies and, lately, towards other major projects such as the connected soldier, a plan to equip U.S. military personnel with extended reality glasses to enhance their capabilities.
This ecosystem is rounded out by dozens of other actors: satellite companies providing near-real-time imagery to the military, numerous manufacturers of new drones and autonomous vehicles, and cybersecurity and cyber-defense groups, providing extensive capabilities on all sides. Experts like Beam highlight Iranian development in the cyber domain, as well as that of other countries such as North Korea.
"There is also the sophistication of cyber defense, such as Iran using AI in its attacks to search, extract, and structure data from different countries and governments. Until February, they were inside a U.S. bank. Google has also reported that Gemini AI was the target of AI phishing by Iran," points out the former CIA officer, who also emphasizes that this "sophistication" extends to using AI agents constantly for continuous attacks.
Among the benefits touted for this transformation of the military world, two stand out: fewer casualties and lower economic costs. The paradigmatic example of the economic logic of the new war is the downing of Iranian Shahed drones, which cost around $25,000, with Patriot batteries firing far more expensive interceptor missiles. The change, however, runs deeper, and the accounting is not so straightforward.
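The cost asymmetry described above can be sketched with a trivial exchange-ratio calculation. The $25,000 Shahed figure comes from the text; the interceptor price below is a hypothetical placeholder for illustration only, not an official figure:

```python
# Illustrative cost-exchange sketch. The Shahed figure is from the article;
# the interceptor cost is a hypothetical placeholder, not an official number.
SHAHED_COST = 25_000          # approximate cost of one Shahed drone (from the text)
INTERCEPTOR_COST = 3_000_000  # hypothetical cost of one interceptor missile

def exchange_ratio(attacker_cost: float, defender_cost: float) -> float:
    """Dollars the defender spends for every dollar the attacker spends."""
    return defender_cost / attacker_cost

ratio = exchange_ratio(SHAHED_COST, INTERCEPTOR_COST)
print(f"Defender spends {ratio:.0f}x more per engagement")
```

Under these assumed figures, every drone shot down costs the defender two orders of magnitude more than it costs the attacker to launch, which is why Rand's quantity-over-quality argument, discussed next, carries weight.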
In this regard, the influential think tank Rand points out that AI will bring a significant economic shift: from prioritizing quality to prioritizing quantity. In other words, from a paradigm of hyper-complex, technologically exquisite multi-billion-dollar programs to a new battlefield where the emphasis is on deploying many cheap, easily replaceable elements, such as drones, or decoys designed to distract an AI and prevent it from making decisions.
All of this points toward a scenario increasingly dominated by autonomous vehicles and robots, as Sánchez Tapia highlights. The University of Navarra professor emphasizes that humans will always retain "certain decisions," but the trend is moving towards "programmed swarms" with predefined missions and humans as the ultimate decision-makers.
A path that Europe is moving along more slowly. "We are a bit behind China, Israel, and the United States," he stresses, appealing to the Union. "For our defense and artificial intelligence industry to be autonomous, it must be competitive, and from my point of view, it is not. Not only the sector itself but also the legislative environment, the regulations that need to support these industries. If they are restricted while other regions face no limitations, people will ultimately seek suppliers outside the continent, which is precisely what we want to avoid," he reflects.
However, the future bill for this new war is not so clear. A report from the Brennan Center for Justice at New York University estimates that the U.S. government has already committed over $75 billion to its efforts to obtain better autonomous weapons. Among the allocations, a failed project of over $20 billion for Microsoft to develop the connected soldier glasses stands out, a project now managed by Anduril.
In its study, Rand also warns of the massive implications this new kind of war has for the logistics of an invading army. On one hand, there is the need to transport hypothetical robots, drones, and other elements to the combat zone, a challenge that requires building new logistics chains: many drones, unlike aircraft, are single-use and destroyed on impact, so replacements must be carried along.
Part of these savings could be capitalized on by reducing personnel, as there would be fewer soldiers on the ground, according to Rand's analysis, which also points out that AI has the potential to help armies compensate for the lack of manpower in their cybersecurity areas and generate code with fewer vulnerabilities that rival attackers could exploit.
There is also another economic aspect that cuts across this transformation: computing and storage are needed for everything to work. Researchers from the Brennan Center, Amos Toh and Emile Ayoub, tracked contracts from the Pentagon and other branches of the military: they found commitments for over $9 billion (€7.766 billion) in cloud capacity with Amazon Web Services, Google, Microsoft, and Oracle. Another example is Trump's pharaonic anti-missile project, the Golden Dome, which would cost another $25 billion in development, plus subsequent operation with high energy and personnel costs.
This initiative also raises another issue: the conversion of data centers into strategic targets, as already seen with Iran's attacks on centers in the Emirates. "Iran has already put Amazon, Microsoft, Google, or Nvidia on its target list...", points out the Arcano Research geopolitics expert, who highlights submarine cables as another key element. "90% of our communications go through these cables (...) Russia doubles its attacks on infrastructure in Europe every year: cables, radar centers, hospitals, energy; each attack is part of a war of destabilization," Beam describes, indicating that we are heading towards a scenario of greater impact from these types of attacks, in fields such as the use of AI to amplify misinformation on social networks.
A situation amplified by the lack of moderation on these platforms and the ability to create increasingly realistic images with artificial intelligence, painting a scenario that leaves little doubt about the potential of AI as a weapon in the midst of escalating global uncertainty.
