The Human in the Loop
On February 9th, GA! Think Tank presented an Expert Talk with HCSS Strategic Analyst Sofia Romansky on the use of artificial intelligence in the military domain and the subsequent consequences for international governance. Below is a report of the key ideas and points of discussion that Ms. Romansky presented.
The following report reflects an academic discussion and a series of ideas presented at GA’s Expert Talk. GA is a platform for academics, students, and young professionals to share their ideas regarding international relations and history. We do not take a position but seek to further intellectual debate and discussion.
Why AI?
Artificial intelligence is one of the most debated topics in international relations, yet the urgency of its military application has only recently come into focus. The year 2018 can be seen as revolutionary, as it marked the arrival of generative AI and large language models (LLMs) that could mimic human speech.
This software revolution coincided with a hardware breakthrough in logic chips, pushing computational power to the boundaries of physics. These developments have become entangled in the current Great Power Competition between the United States and China.
AI is now both the subject and the object of this rivalry: states use AI as a tool to gain the upper hand, but they also fight for dominance over the technology itself. Furthermore, as a technology, AI relies on hardware, which in turn relies on raw materials for its production. The competition therefore also extends to securing access to, and an abundance of, these resources.
The Speed of War: The OODA Loop
The core military advantage of AI lies in its ability to accelerate the decision-making cycle, known as the OODA loop. While states and their militaries vary, all of them go through this loop before undertaking any meaningful operation.
- Observe: AI can support intelligence and surveillance, processing more data than any human ever could. Threats can be detected more quickly, giving the state more time to react.
- Orient: AI can analyse the data gathered in the first phase so that an appropriate strategic, tactical, or operational design is created.
- Decide: Every military wants to avoid errors in its decision-making. AI tools can help detect errors and provide accurate predictions of the outcomes of candidate decisions.
- Act: Finally, on the battlefield, AI can be an integral part of the operation, as manoeuvring, movement, and targeting can be delegated to a system that operates faster than humans.
The actor who cycles through this loop fastest wins. Nor is this a vision of a distant future: AI is already being used to accelerate the loop. This can be seen in the War in Ukraine, where AI is used daily to enhance battlefield awareness, identify targets, and enable autonomous drone capabilities.
This raises the issue of ‘moral disengagement.’ A software program does not suffer the psychological toll of a ‘tragic choice,’ such as a rocket strike that risks civilian casualties. If AI operates without human control, the risk of such tragedies increases.
Furthermore, because AI systems are trained on human data, they inherit human biases. These biases enter through the input itself – i.e., AI software is not critical of the data it is given, so its outputs are inextricably tied to the data that feeds it.
A biased algorithm in a recruitment tool or a language model is a problem; a biased algorithm in a targeting system is a catastrophe in waiting.
Romansky also warned about calibrating trust in these systems. It is not clear how much trust is appropriate in relation to the outputs of semi-autonomous systems. Too little trust may lead operators to ignore their outputs, whereas too much trust risks ‘automation bias’ – i.e., assuming the computer is always right and relying solely on its judgement.
International Governance and AI
The question of AI in the military domain has not gone unnoticed. Several international initiatives exist that aim to mitigate the threats posed by AI. One of these is GC REAIM (Global Commission on Responsible AI in the Military Domain), in which Romansky is a project coordinator. Military initiatives concerned with AI are numerous; nevertheless, their membership is fragmented across smaller organizations.
Challenges
All of these projects face several problems stemming from the technological nature of AI itself.
Currently, AI is an overarching term that encompasses both civilian and military applications. Legally, it is therefore nearly impossible to delineate where AI needs to be regulated, as almost any civilian system might be militarised.
AI is being developed by many actors with varying interests and values – the private sector, the military, and academia, among others. This diversity of developers complicates the creation of a broader regulatory framework.
One of the most fascinating problems is the AI Power Paradox: technology is developing significantly faster than the policies meant to regulate it. Moreover, states are reluctant to join regulatory efforts, as they fear losing their competitive advantage.
Norms
As these challenges complicate the implementation of international guidelines, several norms have been proposed by the international initiatives.
AI needs to be constrained by international law in the same way that humans are. Furthermore, all responsibility for its actions must remain with the commanders using the systems. After all, AI is only a tool, not an autonomous actor. Therefore, if a war crime were committed with the use of AI, the software would not be to blame; the operator would.
AI systems need to be reliable and their decisions traceable. They need to act predictably and show why they make certain decisions, so that no ambiguity exists. If a system acts unpredictably, there must be clear control over it so that it can be switched off immediately.
Finally, states need to cooperate and share their information regarding AI usage and development. By creating an open atmosphere, the risk of accidents or misunderstandings is lowered.
A historical parallel can be drawn to the nuclear hotlines and arms-control agreements. The danger of mutually assured destruction once already led the world to cooperate in mitigating such threats. If the new superpowers agree to continue this trend, the risks posed by AI in the military sphere drop drastically.
The Geopolitical Triad: Why the EU Must Lead
The global competition in AI development is mainly between the US and China, but neither superpower is particularly interested in adopting international norms.
In the US, AI development is driven primarily by the private sector, whose commercial imperative of profit and innovation leads it to lobby against meaningful regulation. China, on the other hand, does regulate AI, but it remains an autocratic state: it lacks the transparency and democratic oversight needed to lead the setting of global norms around AI.
This leaves the EU as the only actor positioned to spearhead international governance. Because it is beholden neither to Silicon Valley’s profit margins nor to an authoritarian regime, the EU can prioritize normative frameworks, ethics, and international law.
Conclusion
The world is not going to return to a time without AI, even if the current business bubble bursts. AI is already deeply ingrained in both civilian and military life. Nevertheless, it is only a tool, and if it is treated accordingly, humanity may profit and learn from it.
Understanding how AI is developed is a task that requires expertise from various fields, from computer engineering to international relations and beyond. Only then can the complexity of these systems be meaningfully tackled.
Current governance needs to be broadened and applied directly to how AI is used and developed. This is by no means an easy task, but clear guidelines can ensure that AI is used in accordance with existing legal and ethical frameworks.
Lastly, given the accelerating development of AI, we need to remain alert to the new potential uses and threats this technology can bring.
We at GA! Think Tank would like to thank Ms. Romansky for her excellent lecture and for bringing more clarity to one of the most complex security challenges of our time.
About HCSS
The Hague Centre for Strategic Studies (HCSS) is an independent think tank providing analysis on geopolitical and security challenges.