The World Confronts the Dilemma of Military Use of AI

The use of AI in the military raises some ethical dilemmas. For what purposes is it already being used? What is being done to regulate its use? Which companies stand out?
The use of AI in the military is a palpable reality. For example, the war in Ukraine is showing us that drones have become a precise, relatively cheap and deadly weapon of war.
This is just one of the applications of this technology in war scenarios, although there are many more.
However, before we jump into developing military applications of AI, we should pause for thought. This is what Google did a few years ago.
‘In 2018, after the controversy with Project Maven, a Pentagon programme that used Google AI to analyse drone imagery, the company announced that it would not develop AI for military applications,’ explains Hervé Lambert, Global Consumer Operations Manager at Panda Security.
But the company seems to have changed its stance: just a few weeks ago it removed from its principles the explicit promise, introduced in 2018, not to use AI in military applications.
‘Google was self-imposing a limit on not using AI for military purposes. In particular, it prohibited itself from using technologies that cause general harm and from using AI for weapons whose primary purpose is to directly cause or facilitate injury to people. Its new version of ethical principles talks about implementing human oversight mechanisms to align with the principles of international law and human rights. We have gone from concrete facts to generic fine words,’ says Juan Ignacio Rouyet, professor at the School of Engineering and Technology of the International University of La Rioja (UNIR).
So what does this mean? Has a new door been opened? Are we about to witness an AI-supported arms race?
‘It is still too early to say what the consequences of this decision will be. From Google’s side, all we know is that it has done so on the grounds that democratic countries need to lead the development of AI and collaborate with governments that share democratic values. Obviously, this decision has raised concerns among human rights activist groups and ethicists,’ says Josep Albors, director of research and awareness at ESET Spain.
‘This does not mean that Google is developing autonomous weapons, but it does indicate a change of approach, which could have consequences. Big tech companies, including Microsoft and Amazon, have signed contracts with the US Department of Defence, and the line between ‘civilian’ and ‘military’ in AI is becoming increasingly blurred,’ adds Lambert.
Likewise, Rouyet believes that ‘the door was already open’. ‘There is a huge market behind it, and everyone wants a piece. The US arms budget for 2025 is $310 billion, of which $17.2 billion is for science and technology,’ he says.
Military applications where AI is already being used
When we talk about AI, we must bear in mind that it is a concept that encompasses a wide variety of techniques and technologies, from machine learning and neural networks to natural language processing and computer vision, as Joaquín David Rodríguez Álvarez, Associate Professor of Administrative Law at the Autonomous University of Barcelona (UAB), explains.
‘If we stick to their current uses in the military field, we can see that there is a multiplicity of uses, ranging from the development of Lethal Autonomous Weapon Systems (LAWS) to semi-autonomous systems. There are currently many systems that fall into this category, from US Predator-type drones to Turkey’s STM Kargu-2 drone, as well as a long list of systems in the possession of a wide variety of countries,’ he specifies.
He also notes that some media outlets have reported on Israel’s use of AI software to plan targeted assassinations in Gaza and to bomb people identified as targets while they were in their homes.
The UNIR professor also points out that AI can be used in multiple applications. ‘Every aspect of defence that we can think of makes use of AI: facial recognition for target identification; machine learning for fighter jet assisted flight; data analytics for threat detection; swarm intelligence for joint drone flight; robot dogs to support infantry; neural networks for any unmanned system. AI in the military has been used for years for operational calculations in the rear. Now it has moved to the front line. In the not-so-distant future, machines will be fighting machines. I hope it won’t be us,’ he says.
Delving into that less visible side of AI on the battlefield, Albors highlights automated defence systems that respond to threats without the need for human intervention, the analysis of large amounts of data to identify potential risks and improve decision-making, and the use of AI in simulations that train troops by replicating real environments with high fidelity.
On the other hand, he points out that ‘AI has also been contributing a great deal of value, for some time now, to cybersecurity related to military operations in cyberspace, both in its offensive and defensive aspects’. ‘This makes it easier to detect and block cyber-attacks more quickly and effectively, and easier to generate code that can then be used in cyberspace operations,’ he adds.
The Panda expert also notes that ‘it is being used in intelligence data analysis, threat detection, logistics optimisation and the development of autonomous systems, such as drones and unmanned combat vehicles’. And he points out that AI assists in strategic decision-making in warfare contexts and in the development of autonomous weapons, as well as playing a crucial role in electronic warfare and intelligence analysis.
‘The US, China, Russia and the EU have been investing in these technologies for years, with applications ranging from cyber defence to real-time decision-making on the battlefield,’ he concludes.
Ethical dilemmas of military use of AI
The use of this technology in the military presents considerable ethical dilemmas. ‘A robot may not injure a human being or, through inaction, allow a human being to come to harm,’ reads the first of Isaac Asimov’s three laws of robotics.
Thus, from the outset, any application of AI aimed at killing would clash head-on with the most basic principle of this law.
The UAB professor claims the inclusion of AI systems on the battlefield entails ‘the dehumanisation intrinsic to the delegation of lethal processes to non-human entities, capable of killing people without significant human control’. He even believes it could lead to ‘the facilitation and proliferation of extrajudicial killings’.
‘The idea of ‘killer robots’ is not science fiction. The UN has been discussing LAWS systems, which could select and attack targets without human supervision, for years,’ Lambert adds.
Rodríguez Álvarez also believes that it leads to the ‘breaking of the rules that regulate war’, such as International Humanitarian Law, the Geneva Conventions, etc. ‘Although it may not seem like it, war is highly regulated, with the fundamental objective of protecting the civilian population,’ he points out. But these rules are dynamic and contextual, which is why he believes that they are ‘highly difficult for AI systems to comply with’, as they are ‘weak in real-time contextual analysis’.
In addition, the Panda representative warns about the lack of transparency in military algorithms. ‘If an AI system makes a bad decision that results in civilian casualties, who is responsible? The developers? The military that used it? The software manufacturer?’ he notes.
‘The unbridgeable ethical boundary is that never should that responsibility lie outside a human being. In traditional warfare, that responsibility lies with the person who pulls the trigger. In modern warfare, that trigger is in the algorithm of a neural network, but there is still a ‘trigger’ and a person who decides when to pull it and when not to pull it, even if they are thousands of miles away in a machine learning lab,’ says Rouyet.
In addition, Lambert points out that ‘the risk of bias in AI is also a concern’ since ‘a poorly trained algorithm could misidentify a target and launch the wrong attack’. Likewise, the UAB professor points out that the use of AI weakens human judgement, ‘due to inherent biases associated with the use of these systems’.
He also warns that the autonomy without communication allowed by some AI-based systems entails a high risk of loss of control, as they are designed to fly at speeds or navigate at depths beyond the reach of our communication systems.
On the other hand, the ESET expert stresses that ‘we cannot forget that we are dealing with a system that can suffer cyber-attacks, allowing enemy actors to take control of those systems governed by an AI to use them against us’.
Finally, Lambert asks how security is balanced with the right to privacy. ‘The ability of AI to analyse large volumes of data and conduct mass surveillance can infringe on the privacy of individuals and groups, both in times of peace and conflict,’ he specifies.
How can we avoid the irresponsible use of AI?
If we don’t want the situation to get out of hand, we need to set limits. ‘There are numerous steps to take to try to avoid these conflicts involving the use of AI in military environments, starting with establishing appropriate and limited use of AI through international regulations. It would be a kind of update of the Geneva Convention adapted to today’s conflicts and the introduction of AI,’ says Albors.
‘Obviously, these regulations should be made taking into account ethical values when programming AI to be used in military environments, including experts from various fields and with the greatest possible international consensus so that no country or region has any advantage over the rest. Furthermore, this regulation should be done transparently so that it can be freely consulted and reviewed by independent bodies,’ he adds.
Likewise, the UNIR professor stresses that ‘the Geneva Conventions, the principles of war crimes or the convention on certain conventional weapons will have to be updated with this new technology’. All of this, without forgetting that ‘military power, which is exercised through governments, must never escape a certain degree of control’, which in democracies is exercised through parliaments.
However, we do not seem to be on the right track, as recent international conflicts are showing us that all these international agreements become a dead letter when war breaks out.
In addition, the Panda expert warns that ‘the AI arms race is moving faster than international treaties’. ‘Groups like the Campaign to Stop Killer Robots are calling for a total ban on autonomous weapons systems, but major powers like the US and Russia oppose restrictions that would limit their military development,’ he says.
‘The least that should be done is to ensure human oversight of any lethal use of AI, establish standards of transparency in the development of these systems, and promote multilateral agreements that prohibit certain uses. But with competition between countries for technological supremacy, the viability of these measures is uncertain.’
In fact, he points out that there is still no global treaty that specifically regulates the use of AI in the military sphere, although there are some initiatives. ‘In the UN framework, the Convention on Certain Conventional Weapons (CCW) has been discussing the issue since 2013, but has failed to establish binding restrictions. The European Union has pushed for AI regulations with ethical criteria, but their impact on the military is limited. China and the US have expressed support for certain ethical principles, but in practice both countries are investing huge resources in developing advanced military AI. In 2023, the White House published the Policy Statement on Responsible Military Use of AI, but without concrete commitments,’ he notes.
In addition, Albors points out that ‘there are several agreements that address this issue, including the NATO declaration on AI in 2021, the Political Declaration on the Responsible Military Use of AI during an international summit in The Hague in 2023, and the EU AI Regulation’.
Rouyet also notes that ‘the Future of Life Institute promulgated in 2017 the so-called Asilomar AI Principles as an initiative to restrict its use, also considering the military domain’, although they are not binding principles and can be signed by anyone. And he notes that Elon Musk is among those who have subscribed to them.
Finally, Rodríguez Álvarez points out that the latest initiative in this direction is the Paris Declaration on Maintaining Human Control in AI-based Weapon Systems.
Business on the front line
As we said before, the use of AI in the military sphere is a lucrative pie from which many companies want a slice.
‘In some cases, we are talking about companies that have been in the military industry for a long time and are now entering the race to boost their sales in this industry with the help and integration of AI,’ says the ESET expert.
For example, Lambert points out that ‘the defence AI sector is dominated by companies such as Palantir, which specialises in data analytics for military intelligence; Lockheed Martin, which develops autonomous systems and AI for fighter aircraft; Northrop Grumman, which works on autonomous drones and electronic warfare systems; BAE Systems, which develops AI for unmanned vehicles; and Anduril Industries, a startup focused on autonomous defence and surveillance with AI’.
Albors adds other companies such as Raytheon Technologies (specialised in autonomous systems, predictive maintenance, cybersecurity solutions and missile guidance), Helsing (real-time battlefield analysis, electronic warfare and cyber defence…) or Mistral AI (Vision-Language-Action, or VLA, systems combining visual perception, language understanding and response automation in defence platforms).
The UNIR professor also adds other companies to this list, such as L3Harris (communication systems, electronic warfare, surveillance and space solutions) and General Dynamics (armoured vehicles, combat systems, submarines, aviation technology and cybersecurity).
In addition, he notes that companies such as Meta, OpenAI and Anthropic ‘put their AI systems to military use’. In particular, he focuses on Clearview, ‘whose facial recognition system is used by government security forces, and which was surrounded by controversy when it trained its AI with public photos from social networks without any kind of control’, undermining the principle of privacy ‘for the sake of the principle of security’, he stresses.
And what about the arrival of Trump?
The emergence of Donald Trump on the international stage could energise a potential AI arms race. ‘Trump has launched the Stargate project with an investment of $500 billion in AI over five years. OpenAI, SoftBank, Microsoft, Nvidia, Arm and Oracle are involved. The 20th century was marked by the race to the Moon. In this 21st century, the Moon is AI. The technological advances of that race have ended in the military and social fields, in that order. There is every reason to believe that this will also be the case now,’ says Rouyet.
Likewise, the head of Panda points out that the aim of these investments deployed by the Trump administration is to stand up to China in the AI arena as well. ‘In his first term, he launched the National AI Strategy and promoted investment in defence technology. Now, with his second term in office, the Pentagon’s AI budget is likely to skyrocket, benefiting large military contractors and technology companies with AI projects’.
But China will not be far behind. ‘It has integrated AI into its military strategy with a ‘civil-military fusion’ approach, where technology developed by private companies such as Huawei or Baidu is also used in defence. In 2017, Beijing announced its goal to lead the world in AI by 2030. And its investment in military AI is opaque but massive,’ he warns.