NATO’s Artificial Intelligence Push and its Military Implications

Sanur Sharma

The technological advancements in Artificial Intelligence (AI), machine learning, big data analytics, robotics, quantum computing and virtual reality have led to a rise in the use of autonomous systems in military applications. This is changing the face of the battlefield by enabling new forms of military functions over and above conventional systems, and thereby the execution of more coercive actions. The North Atlantic Treaty Organization (NATO) countries are also adopting such emerging technologies to maintain their strategic advantage and to mitigate transnational threats.

Russia’s offensive cyber hostilities and China’s military adoption of AI to augment its high-tech warfare capabilities have emerged as the contributing factors for NATO to upscale its efforts in Emerging and Disruptive Technologies (EDTs). NATO is making ambitious investments in EDTs to ensure interoperability and standardization among member states.

This Issue Brief takes stock of NATO’s current strategic surge in AI adoption and its ongoing efforts to exploit EDTs for defense innovation. It discusses the role of AI in contemporary conflicts, specifically NATO’s response to the Russia–Ukraine conflict, and explores the vulnerabilities in AI systems as well as the challenges and limitations in NATO’s adoption of AI.

NATO’s Technological Push

The 2021 report of the US National Security Commission on Artificial Intelligence states that China is leapfrogging to new technologies by investing in intelligentized warfare, such as swarm drones, and by using AI for reconnaissance, electromagnetic countermeasures and coordinated firepower strikes. The US is working jointly with its allies on the policy implications of such new technologies. It is also partnering with countries like Canada, Denmark, Estonia, the UK, France and Norway to develop military standards for AI.

In October 2021, NATO formally adopted its first strategy on the responsible military use of AI, in support of its three core tasks: collective defense, crisis management and cooperative security. The strategy aims to accelerate the uptake of AI in military systems. Its six principles of responsible use are Lawfulness, Responsibility and Accountability, Explainability and Traceability, Reliability, Governability and Bias Mitigation. The strategy seeks to protect, monitor and innovate AI and related disruptive technologies in a phased manner so as to build political support for AI military projects.

At the NATO Summit held in Brussels in 2021, NATO’s new Defense Innovation Accelerator for the North Atlantic (DIANA) was launched as part of the NATO 2030 Agenda. It aims to maintain NATO’s technological edge over nations like China and Russia, which are challenging the West with accelerated investments in technological capacity and the use of offensive and subversive measures.

AI has been a contributing agent in weaponizing cyberspace and taking cyberwarfare to the next level in modern battlefield scenarios. While some of its uses, such as scaled data analytics, data fusion, deepfakes and cyber defense, have matured, its use in autonomous weapon systems and other complex operational applications is at a nascent stage.

AI has been aggressively used to spread disinformation in the Russia–Ukraine war. Machine learning algorithms have been used to amplify misleading and fake content on social media platforms, such as doctored videos of invading forces and fake live streams. Russia is said to have used AI-enabled systems not only on the battlefield but also in cyberspace, targeting Ukraine’s critical infrastructure. Russian troll farms are alleged to have used AI to generate human faces for fake propagandist personas on social media platforms like Twitter, Instagram and Facebook.

Similarly, Ukraine has been given free access to Clearview AI’s facial recognition software, which has a database of two billion photos crawled from Russian social media platforms. The software is being used to detect Russian forces, identify the dead and gauge the spread of disinformation in cyberspace. Companies had tapped AI’s analytical potential even before the Russia–Ukraine war started.

The Russia–Ukraine conflict has become a test case for AI adoption in modern warfare. The US is using the conflict as a test-bed for many of its AI projects, with the Pentagon’s ‘Maven’ project contributing to the detection and classification of objects of interest in drone footage through AI and Machine Learning (ML) algorithms. The Pentagon has reportedly been using AI and ML tools to collect vast amounts of data on the Russia–Ukraine war and analyze it to generate battlefield intelligence about Russian command and control strategies.
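
To illustrate what such detection and classification of objects in drone footage involves at a technical level, the sketch below runs a generic pretrained detector over a single video frame. It is purely illustrative and not Maven’s actual pipeline; the model choice (torchvision’s Faster R-CNN, version 0.13 or later), the confidence threshold and the file name “frame.jpg” are assumptions made for the example.

```python
# Illustrative only: detecting and classifying objects in a single video frame
# with a generic pretrained detector (not the Pentagon's Maven pipeline).
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights)
model.eval()

# "frame.jpg" is a hypothetical still extracted from drone footage.
frame = read_image("frame.jpg")
preprocess = weights.transforms()
batch = [preprocess(frame)]

with torch.no_grad():
    detections = model(batch)[0]

# Keep only confident detections and map label indices to class names.
class_names = weights.meta["categories"]
for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score.item() > 0.7:
        print(f"{class_names[label.item()]}: score={score.item():.2f}, box={box.tolist()}")
```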

Advanced AI-enabled systems with the US Department of Defense (DoD) are said to have been used to oversee the battlefield and to collect and archive signals intelligence. It was stated at Defense One’s Genius AI Summit in April 2022 that all this information would be fed into systems to train machine learning algorithms that support future decision-making. The US and NATO allies are believed to have already built such AI-enabled cyber weapons and defenses, information about which is said to be highly classified.

Vulnerabilities in AI systems include data poisoning and input attacks, attacks on the data supply pipeline by crafting malicious data and seeding it into public resources, and white-box and black-box attacks. Adversaries can present orchestrated or conflicting data to AI models to derail them, exploit weaknesses in the underlying algorithms and actively manipulate model behavior.
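
As a concrete illustration of the data-poisoning threat described above, the minimal sketch below flips the labels of a fraction of a training set, mimicking an attacker who seeds mislabelled data into a public resource, and compares a simple classifier trained on clean versus poisoned data. The synthetic dataset, the logistic regression model and the 20 per cent poisoning rate are illustrative assumptions, not drawn from any fielded military system.

```python
# Minimal illustration of a label-flipping data-poisoning attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline: model trained on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy with clean training data:", clean_model.score(X_test, y_test))

# Attack: flip the labels of 20% of the training set, as an attacker might do
# by seeding mislabelled examples into a public data source scraped for training.
y_poisoned = y_train.copy()
flip_idx = rng.choice(len(y_train), size=int(0.2 * len(y_train)), replace=False)
y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
print("accuracy with poisoned training data:", poisoned_model.score(X_test, y_test))
```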

The Defense Advanced Research Projects Agency (DARPA) has launched the Guaranteeing AI Robustness against Deception (GARD) programme. Under this programme, development efforts are underway to establish a theoretical foundation for defensible ML and to create and test such defensible systems.
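
GARD’s internal methods are not public, but one standard building block of defensible ML that such programmes draw on is adversarial training: augmenting training batches with adversarially perturbed inputs so the model learns to resist them. The PyTorch sketch below illustrates the generic technique using the fast gradient sign method (FGSM); the network architecture, the synthetic data and the epsilon value are assumptions for the example, not GARD’s actual approach.

```python
# Generic adversarial-training loop using FGSM perturbations (illustrative).
import torch
import torch.nn as nn

def fgsm_perturb(model, loss_fn, x, y, epsilon=0.1):
    """Return inputs perturbed in the direction that maximizes the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

# Small stand-in classifier and synthetic data (256 samples, 20 features, 2 classes).
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 20)
y = torch.randint(0, 2, (256,))

for epoch in range(10):
    # Train on both the clean batch and its adversarially perturbed version.
    x_adv = fgsm_perturb(model, loss_fn, x, y)
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
```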

AI technology will be an indispensable instrument of modern warfare in future conflicts beyond Ukraine. Countries seeking a technological edge over others have begun investing heavily in AI to strengthen their militaries. NATO has invested US$ 1 billion to develop new AI defense technologies. The US DoD has also planned to invest US$ 874 million in AI-related technologies as part of its research and development budget for fiscal year 2022. The UK Ministry of Defence is funding suppliers to work with the Defence Science and Technology Laboratory (Dstl) on AI projects, with funding of £7 million in 2021/22 that is expected to rise to £29 million the following year.

The influence of AI on NATO comes with a set of opportunities, challenges and risks. Its adoption process has been incremental and prescriptive. Rising geopolitical conflicts and the use of AI in them have necessitated a dynamic ecosystem that supports interoperability. The military adoption of AI requires an innovation ecosystem that is self-sufficient, supports deterrence and resilience, and encompasses the strategic innovation process.

NATO’s AI strategy raises many concerns related to AI-driven autonomous weapon systems, as it does not adequately address the development, deployment and governance of such systems.

Another challenge for NATO is to standardize rules for all member states on AI-enabled autonomous weapon systems. Countries like Turkey are working on autonomous weapons and have developed AI-enabled loitering munitions. Turkey has also requested upgraded F-16 fighter jets from the US that are said to be AI-enabled.

NATO will also have to focus on the vulnerability and intrusion issues of AI-enabled systems and will need to set up dedicated centers for AI development and testing in order to maintain a test-safety regime for AI-enabled systems-of-systems. The challenges related to AI use in wars and geopolitical conflicts need to be addressed to generate confidence in such systems. Additionally, testing mechanisms and accuracy standards need to be implemented for system components. Policymakers need to address the operational risks and ethical considerations of employing AI in military systems.

In the future, AI will act as an enabler to out-adapt competitors and adversaries. NATO’s current AI strategy needs to address the vulnerabilities in AI systems, along with measures for the effective use of autonomous weapon systems and the military governance of AI. The NATO accelerator has been devised to address and prioritize interoperability in transatlantic cooperation and to drive the strategic innovation process.

Furthermore, NATO needs to protect its use of AI from manipulation and disruption and align it with its stated principle of the “responsible use of AI”. NATO needs to work on AI adoption challenges centered on innovation and arms control. It could look towards establishing guiding principles on the use of AI-driven lethal autonomous weapon systems. In the next two to three years, AI’s use is expected to remain confined to military logistics, reconnaissance, mission planning and support, predictive maintenance of military facilities, data fusion and analysis, cyber defense and the optimization of processes. In the long run, NATO could employ AI for more complex military applications as it generates greater political support for offensive AI military projects.

Dr Sanur Sharma is an Associate Fellow at the Manohar Parrikar Institute for Defense Studies and Analyses.

Views expressed are of the author and do not necessarily reflect the views of the Manohar Parrikar IDSA or of the Government of India.

This is an abridged version of the article that first appeared in the Comment section of the website (www.idsa.in) of the Manohar Parrikar Institute for Defense Studies and Analyses, New Delhi, on May 24, 2022.
