The idea of the immutability of the nature of war, as formulated by Clausewitz, is an article of faith that is constantly put on trial. The latest development in human history that could potentially change the nature of war is Artificial Intelligence (AI). In a recent article published in this magazine, Alloui-Cros argued that the nature of war will not change.[i] He based this conclusion on three points: AI is just a tool that compresses timeframes but cannot make complex decisions; AI has human biases and is designed to solve human problems; and war is a human activity in which we will always have a choice to determine its course. Judging by the AIs currently available, he is probably correct: AI will not change what war is or break the trinity of passion, chance and policy that defines its nature. His conclusions align with those of other scholars who have discussed how military revolutions changed war. For example, Gray concluded that ‘some confused theorists would have us believe that war can change its nature’.[ii] Echevarria investigated the relationship between the Revolution in Military Affairs (RMA), globalisation, and the nature of war and concluded that, although war is changing, the Clausewitzian framework remains ‘more suitable for understanding the nature of war in today’s global environment than any of the alternatives’.[iii]
On the one hand, Alloui-Cros’ article has merit because it recognised that Clausewitz’s theory of war is still the point of reference for any such discussion, and it applied to AI past conclusions on the effects of technological revolutions on the nature of war.
On the other hand, he did not consider whether an AI with human-like capabilities, a so-called Artificial General Intelligence (AGI), whose capabilities far surpass human comprehension, could falsify this theory. Vinge called such an AI a ‘singularity’, borrowing a mathematical term for a point where a function degenerates and changes its nature, becoming qualitatively different from what came before. Vinge concluded that ‘it is a point where our models must be discarded and a new reality rules’.[iv] An AGI that far surpasses human capabilities is thus called a singularity because, once it appears, the past will no longer be a guide to forecasting or understanding the future. Some authors have portrayed this possibility as the end of the world.[v] The implicit conclusion is that it is not worth studying what comes after, because the AGI singularity will annihilate us. This position is disputable: if we have no way of knowing what this new reality will be like, it is equally impossible, and equally useless, to conclude that the singularity will destroy rather than save us. Furthermore, as Vinge argued in his seminal paper, as time passes we should see the symptoms of the singularity’s advent.[vi] Hence it is worth studying how the nature of war will be altered by this new, evolving reality. Alloui-Cros answered the question on AI and the nature of war for the reality we know. The purpose of this article is to add to this discussion by speculating about what might happen to the nature of war as we approach the AGI singularity.
This essay is divided into three parts. Firstly, it presents the two conditions needed for an AGI to become a singularity: super-intelligence and consciousness. Secondly, it asks whether AI super-intelligence and consciousness could change Clausewitz’s definition of war. Thirdly, once it is established that war is still organised violence for political aims, it describes how AI super-intelligence and consciousness might influence Clausewitz’s trinity of violence, chance, and politics. The conclusion is that AI super-intelligence and consciousness have the potential to change the nature of war.
What is Artificial Intelligence?
AI researcher Micah Clark wrote that on ‘a very personal and philosophical level, AI has been about building persons, is about “personhood”’.[vii] Current AIs are far from achieving personhood and are better understood as highly optimised algorithms that solve narrow tasks well but transfer those skills poorly to new ones.[viii] Researchers even disagree about whether a synthetic, conscious intelligence capable of performing humanly relevant complex cognitive tasks will ever emerge and eventually surpass human capabilities.[ix] Nonetheless, super-intelligence and consciousness are two steps that, if ever reached, could change war and its nature.
There is no consensus on the essence of human intelligence and even less so on super-intelligence.[x] It is still possible to adopt a working definition like the one proposed by Bostrom: ‘any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest’ is super-intelligent.[xi] This could materialise as an intellect comparable to a human’s but orders of magnitude faster, as one vastly more intelligent, or as a combination of the two.[xii] Initially, it would be a ‘seed’ AI capable of building a slightly better version of itself through recursive self-improvement.[xiii] AI researchers think that with sufficient skill at intelligence amplification, the system could develop new cognitive modules as needed, including empathy, strategic thought and political acumen.[xiv]
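Good’s recursive self-improvement loop can be caricatured in a few lines of code. The sketch below is purely illustrative: the seed capability, improvement skill, and ‘super-intelligence’ threshold are arbitrary assumptions, not claims about any real system; it only shows how a gain proportional to current capability produces slow growth followed by an abrupt explosion.

```python
# Toy sketch of an "intelligence explosion": a seed system whose current
# capability determines how much better its next self-redesign is.
# All numbers are illustrative assumptions.

def self_improve(capability: float, improvement_skill: float) -> float:
    """One redesign cycle: the gain is proportional to current capability,
    which is what makes the process recursive rather than linear."""
    return capability * (1.0 + improvement_skill * capability)

def generations_to_superintelligence(seed: float = 0.1,
                                     skill: float = 0.5,
                                     threshold: float = 1000.0,
                                     max_generations: int = 100) -> int:
    """Count redesign cycles until capability exceeds an arbitrary
    'vastly beyond human' threshold, or give up."""
    capability, generation = seed, 0
    while capability < threshold and generation < max_generations:
        capability = self_improve(capability, skill)
        generation += 1
    return generation

print(generations_to_superintelligence())
```

With these assumed parameters, capability creeps upward for many generations and then crosses the threshold in only a handful of further cycles, which is the qualitative point behind the ‘seed’ AI argument.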
Social psychologists, however, have recognised that the mind, understood as something associated with a single organism, is only an approximation of intelligence. In reality, the mind is social, and it exists inside social and cultural systems.[xv] Artificial Life (ALife) research can give us insights into how machines might organise societies with rules for trade and conflict and act as a social intelligence. ALife envisions the possibility of a society of AIs whose interactions produce a superior intelligence.[xvi]
An AGI might develop consciousness as a tool to optimise its overall reward function, and that consciousness might have characteristics significantly different from those of humans.[xvii] Philosophers and researchers disagree on what consciousness is and on whether self-consciousness is necessary or just a particular sort of phenomenal consciousness.[xviii] In particular, the lack of bodily experience and biological motivations would realise a clear Cartesian mind-body dualism, calling into question at its core the ability of an AI to distinguish itself from the rest of reality, care about itself, and express intentionality.[xix]
The evolution of AI is not completely predictable, but we can expect increasing intelligence, and some level of autonomy approaching consciousness, to develop. Through these two concepts we can explore its impact on war.
Is it war?
Clausewitz’s definition of war
The first question to answer is whether war fought with and by AGIs is still war or a different type of interaction. In ‘On War’, Clausewitz introduces the concept (Begriff) of war as ‘an act of violence (Gewalt) to force an opponent to fulfil our will’.[xx]
This definition comprises three elements: a) violence, b) purpose, and c) the social element.
- For Clausewitz, the result of the application of violence is ‘bloodshed’,[xxi] and the reciprocal element of war gives violence an escalatory quality without theoretical limits to its application.[xxii]
- On the other hand, escalation is a potential outcome rather than a necessary one because the rational decisions of human beings should determine it.[xxiii] Military aims (Ziel) are thus constrained and judged in relation to the political purpose of the war (Zweck) and are only a component of the overall means (Mittel) available.[xxiv]
- War is a relation between communities willing to resist and realise their political aims. It is a function of ‘coalitionary aggression’ and must happen between organised groups with a shared understanding of reality.[xxv]
a. Violence and AGIs
Handel highlights that, for Clausewitz, victory without violence is an aberration in the history of warfare.[xxvi] In theory, it can be achieved by two methods: through manoeuvre,[xxvii] or as ‘war by algebra’, a clash resolved by comparing figures of each other’s strengths.[xxviii] The Prussian general believed the first ineffective and the second impossible because of passion. By contrast, an AI commander might act as a perfectly rational entity and realise the ‘war by algebra’. There are, however, different combinations of this situation worth mentioning. If the AGI is under human control, its evaluation might be overruled by a passionate human commander. Similarly, given the reciprocal nature of war, if the opponent is a human agent, the AGI might be forced to use violence to react to non-rational decisions. Conversely, if it faces another purely rational entity, or if Huntington’s concept of civil-military relations remains valid even when an AGI is in charge of military operations, then an AGI commander might calculate that a battle or a war should not occur. Paradoxically, AGI commanders might agree that the most efficient way to resolve a battle is to calculate the likely outcome and destroy their own resources based on this shared conclusion.[xxix] They would maintain the ‘dominance of the destructive principle’,[xxx] but would morph war, making explicit that it is an act of self-violence.
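What ‘war by algebra’ between two rational commanders could look like can be sketched with a toy model. The sketch below uses the classic Lanchester square law as the shared model of combat; this choice, and all the numbers, are illustrative assumptions, not anything Clausewitz or the cited authors propose.

```python
import math

# Toy "war by algebra": two perfectly rational commanders share a combat
# model (here, the Lanchester square law, an illustrative assumption),
# compute the outcome, and could in principle accept it without fighting.

def war_by_algebra(troops_a: float, eff_a: float,
                   troops_b: float, eff_b: float):
    """Under the square law, side A prevails iff eff_a*A^2 > eff_b*B^2;
    the winner's surviving strength follows from the law's invariant."""
    power_a = eff_a * troops_a ** 2
    power_b = eff_b * troops_b ** 2
    if power_a > power_b:
        survivors = math.sqrt(troops_a ** 2 - (eff_b / eff_a) * troops_b ** 2)
        return 'A', survivors
    if power_b > power_a:
        survivors = math.sqrt(troops_b ** 2 - (eff_a / eff_b) * troops_a ** 2)
        return 'B', survivors
    return 'draw', 0.0

# Both sides run the same calculation and reach the same conclusion;
# the "battle" reduces to destroying the predicted losses.
winner, remaining = war_by_algebra(1000, 1.0, 800, 1.2)
print(winner, round(remaining))
```

With these inputs the larger force wins despite its lower per-soldier effectiveness, which is the kind of counter-intuitive result a purely computational resolution would surface without any blood being shed.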
b. Purpose and AGIs
There must be a rational purpose for a conscious, and thus intentional, AGI to resort to war and violence or self-violence. If the AI does not have a freely chosen purpose and acts violently, if it goes ‘rogue’, then it is not war: it is an unnatural disaster. At the same time, it is unclear what a rational purpose would be for an AGI. Humans have biological motivations and emotions that connect these needs to our behaviours.[xxxi] It is unclear whether an AI would have motivations or whether some non-human motivations would emerge during its evolution. Minsky suggested that free will develops from a ‘strong primitive defense mechanism’ to resist or deny compulsion.[xxxii] If this is true, we can at least assume that a conscious AI will try to defend itself. Unfortunately, this does not clarify whether an AGI would understand human motivations or how much value it would give to itself in relation to the rest of reality.
c. Social element and AGIs
An additional element to consider is that humans and AGIs might have different perceptions of what constitutes a violent act and of its severity. Moreover, as humans, we might not be able to understand the thought processes of a super-intelligent being. This incomprehension of aims and means undermines the definition of war as a social institution: we do not wage wars on apes or cats, and, similarly, AGIs would not have wars with us.[xxxiii] Interestingly, if AGIs develop their own society with norms and shared understandings, as ALife suggests, they could potentially wage AGI social wars for AGI social motivations.
Overall, AGIs might not be interested in human wars unless they perceive them as threats. We will likely need a new word to identify these new social interactions. At the same time, war between humans with AGI assistance cannot be ruled out, and it is thus essential to explore how its nature might change.
Does it change the nature of war?
What is the nature of war?
The nature of war is distilled into what Clausewitz called the ‘wondrous trinity’.[xxxiv] Its elements are a) violence, hatred, and enmity, b) the play of chance and probability, and c) the element of subordination of war to policy and reason.
- Clausewitz identified two types of hostility: hostile feelings, or animosity, and hostile intentions. Hostile intentions are essentially political in nature, necessary for war to occur, and can exist without hostile feelings.[xxxv] Hostile feelings vary in intensity, and war would be an algebraic exercise in their absence.[xxxvi]
- Clausewitz states that war is the realm of probabilities. The unfavourable cases are caused by friction: moral and physical depletion (danger and exertion) and lack of knowledge and bad luck (uncertainty and chance).[xxxvii] Estimating the impact of these factors is a matter of judgement and approximation, because the extremely high number of cases makes it impossible to calculate them mathematically.[xxxviii] Humans’ limited cognitive capabilities force the commander to make ‘good enough’ decisions.[xxxix]
- Clausewitz is adamant that war has a rational component and it is not ‘something autonomous but always […] an instrument of policy’.[xl] It is the job of the statesman and the commander to establish ‘the kind of war on which they are embarking; neither mistaking it for nor trying to turn it into something that is alien to its nature’.[xli] They should do this while not clouded by hostile feelings and after having correctly judged the probabilities.
a. Hostility and AGIs
Superficially, a perfectly rational entity would not be influenced by feelings like hostility. As discussed, it is not clear whether even conscious AGIs would have a purpose other than self-defence. Nonetheless, we can imagine that an AGI might see itself as so precious that it perceives any human activity as hostile. AGIs might thus exist in a state of constant AI-fear, a hyper-rational passion very different from our biologically driven fear, and develop both hostile feelings and hostile intent. A ‘dehumanised perception’ may facilitate violence, brutality, and even extermination, carried out with full awareness of what it is doing.[xlii]
b. Chance and AGIs
A super-intelligence explosion would eventually become asymptotic to perfect knowledge and calculation, effectively realising a so-called ‘Laplace’s Demon’.[xliii] In theory, this entity would suffer almost no friction: it would immediately adjust to events and be relentless in its effort. This is the perfect realisation of war by algebra, and it is a vision incompatible with trinitarian war. In practice, perfect knowledge is impossible because of nonlinear dynamics: it is impossible to eliminate mismatches between the representation of phenomena and their actuality.[xliv] Nonetheless, an AGI would suffer negligible friction compared to humans.
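Why nonlinear dynamics defeats even a near-perfect Laplace’s Demon can be shown with a standard textbook example rather than a model of war: in a chaotic system, two descriptions of the same situation differing by one part in a billion diverge completely within a few dozen steps. The logistic map used below is that textbook example, an assumption chosen purely for illustration.

```python
# Sensitivity to initial conditions: iterate the chaotic logistic map
# x -> 4x(1-x) from two almost identical starting points and watch the
# tiny representation error swamp the prediction.

def logistic_trajectory(x0: float, steps: int) -> float:
    """Return the state after `steps` iterations of the logistic map."""
    x = x0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

true_state = 0.4
estimate = 0.4 + 1e-9   # the Demon's almost-perfect measurement

after_a = logistic_trajectory(true_state, 50)
after_b = logistic_trajectory(estimate, 50)
print(abs(after_a - after_b))  # far larger than the initial 1e-9 mismatch
```

Because the mismatch roughly doubles at every iteration, no finite precision of measurement closes the gap between representation and actuality, which is the point Watts makes about friction surviving even vast computational power.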
As Allen argued, with AGIs under human control, our fiat would be only a constraint and a weakness, and the centre of gravity (Schwerpunkt) would become the speed of action and the effect itself.[xlv] War with almost perfect knowledge would no longer be the realm of the human military genius and, as Van Creveld concluded, ‘fighting does not make sense since it can neither serve as a test nor be experienced as fun’.[xlvi]
c. Policy and AGIs
The acceleration of almost frictionless military activities raises the issue of policy control over them. We can assume that an aware and intentional AGI is always in control of its means and can mediate responses and escalations. The problem arises when humans have access to the power of a super-intelligent but non-conscious AI. If you know that the enemy will relentlessly attack you, you must be ready to defend yourself relentlessly. This might translate into a mindless acceleration of escalation and violence. Non-conscious AIs can be programmed to act within policy limits, but this still implies a diminished role for policy after the conflict has started.
Ultimately, investigating what could happen to the nature of war as we approach AI super-intelligence and consciousness shows that there are extreme cases in which one or two elements of the trinity might collapse and become irrelevant. Unexpectedly, only passion might remain a constant element.
Alloui-Cros’ article argued that even narrow AI will not change the validity of Clausewitz’s theory. This article speculates that a super-intelligent and conscious AGI might. The interaction and conflict with and between super-intelligent, conscious AGIs have the potential to be a novel social interaction with a Begriff different from that of purely ‘human’ wars. Following this logic, AGIs would not change the nature of war; rather, an ‘AGI-war’ would have its own, different nature. Nevertheless, ‘human’ war is unlikely to disappear, and the participation of an AGI nearing super-intelligence and consciousness has the potential to change its nature.
Brodie suggested that Kahn’s ‘On Thermonuclear War’ ‘usefully supplements Clausewitz but […] he does not in any way help to supplant him’.[xlvii] It is possible that, if an AGI emerges, and in anticipation of its super-intelligence and consciousness, we might need a further expansion of Clausewitz’s theory: an ‘On AGI-War’.
[i] Alloui-Cros, Baptiste. ‘Does Artificial Intelligence Change the Nature of War?’. Military Strategy Magazine. 8 (3): 4-8.
[ii] Gray, Colin S. 2006. Another Bloody Century: Future Warfare. Paperback ed. A Phoenix Paperback. London: Phoenix. p. 23.
[iii] Echevarria, Antulio Joseph. 2003. ‘Globalization and the Clausewitzian Nature of War’. The European Legacy 8 (3): 317–32. p. 322–26.
[iv] Vinge, Vernor. 1993. 'The coming technological singularity: How to survive in the post-human era'. Whole Earth Review. Winter 1993.
[v] Torres, Phil. 2016. The end: what science and religion tell us about the Apocalypse. Durham, North Carolina: Pitchstone Publishing. Chapter 5.
[vi] Vinge, 1993.
[vii] Micah Clark is a research scientist from the Florida Institute for Human & Machine Cognition cited in Scharre, Paul. 2018. Army of None: Autonomous Weapons and the Future of War. First edition. New York; London: W. W. Norton & Company. p. 234.
[viii] Payne, Kenneth. 2018. Strategy, Evolution, and War: From Apes to Artificial Intelligence. Washington, DC: Georgetown University Press. pp. 168–72.
[ix] Müller, Vincent C., and Nick Bostrom. 2016. ‘Future Progress in Artificial Intelligence: A Survey of Expert Opinion’. In Fundamental Issues of Artificial Intelligence, edited by Vincent C. Müller, 376:555–72. Synthese Library. Cham: Springer International Publishing; McDermott, Drew. 2007. ‘Artiﬁcial Intelligence and Consciousness’. In 7 The Cambridge Handbook of Consciousness, edited by Philip David Zelazo, Morris Moscovitch, and Evan Thompson, 117–50. Cambridge University Press.
[x] Wang, Pei. 2007. ‘The Logic of Intelligence’. In Artificial General Intelligence. Edited by Goertzel, Ben, and Cassio Pennachin, 31-62. Cognitive Technologies. New York: Springer. p. 31.
[xi] Bostrom, Nick. 2014. Superintelligence: Paths, Dangers, Strategies. First edition. Oxford: Oxford University Press. p. 39.
[xii] Bostrom, 2014, chap. 3.
[xiii] Good, Irving John. 1966. ‘Speculations Concerning the First Ultraintelligent Machine’. In Advances in Computers, 6:31–88. Elsevier.
[xiv] Bostrom, 2014, p. 114.
[xv] Woolgar, Steve. 1985. ‘Why Not a Sociology of Machines? The Case of Sociology and Artificial Intelligence’. Sociology 19 (4): 557–72.
[xvi] Red’ko, Vladimir G. 2007. ‘The Natural Way to Artificial Intelligence’. In Artificial General Intelligence. Edited by Goertzel, Ben, and Cassio Pennachin, 327-351. Cognitive Technologies. New York: Springer. pp. 327–51.
[xvii] Payne, Kenneth. 2018. Strategy, Evolution, and War: From Apes to Artificial Intelligence. Washington, DC: Georgetown University Press. pp. 204–5.
[xviii] McDermott, Drew. 2007. ‘Artiﬁcial Intelligence and Consciousness’. In 7 The Cambridge Handbook of Consciousness., edited by Philip David Zelazo, Morris Moscovitch, and Evan Thompson, 117–50. Cambridge University Press.
[xix] Searle, John R. 1980. ‘Minds, Brains, and Programs’. Behavioral and Brain Sciences 3 (3): 417–24; Perlis, Donald. 1997. ‘Consciousness as Self-Function’. Journal of Consciousness Studies 4 (January): 509–25.
[xx] Clausewitz, Carl von, Michael Howard, and Peter Paret. 1976. On War. Princeton, N.J: Princeton University Press. p. 13.
[xxi] Clausewitz et al. 1976. pp. 13–14.
[xxii] Clausewitz, Howard, and Paret. 1976. pp. 13–15.
[xxiii] Echevarria. 2007. p. 66.
[xxiv] Howard, Michael. 2002. Clausewitz: A Very Short Introduction. Very Short Introductions 61. Oxford; New York: Oxford University Press. Chap. 3.
[xxv] Centeno, Miguel Angel, and Elaine Enriquez. 2016. War & Society. Political Sociology Series. Cambridge, UK; Malden, MA: Polity. p. 13.
[xxvi] Handel, Michael I. 2001. Masters of War: Classical Strategic Thought. 3rd rev. and Expanded ed. London; Portland, OR: F. Cass. p. 109.
[xxvii] Clausewitz, Howard, and Paret. 1976. p. 529.
[xxviii] Clausewitz, Howard, and Paret. 1976. p. 14.
[xxix] For a similar concept, watch Star Trek. 1967. Season 1, Episode 23, ‘A Taste of Armageddon’. Aired February 23, 1967 on NBC.
[xxx] Clausewitz, Howard, and Paret. 1976. pp. 269–70.
[xxxi] Payne. 2021. p. 31.
[xxxii] Minsky, Marvin L. 1968. ‘Matter, Minds, Models’. In Semantic Information Processing, edited by Marvin L. Minsky. MIT Press. pp. 425–32.
[xxxiii] Woolgar, Steve. 1985. ‘Why Not a Sociology of Machines? The Case of Sociology and Artificial Intelligence’. Sociology 19 (4): 557–72. pp. 557–72.
[xxxiv] Clausewitz, Howard, and Paret. 1976. pp. 30–31.
[xxxv] Echevarria. 2007. p. 72.
[xxxvi] Clausewitz, Howard, and Paret. 1976. p. 14.
[xxxvii] Clausewitz, Howard, and Paret. 1976. pp. 48–69.
[xxxviii] Clausewitz, Howard, and Paret. 1976. p. 59.
[xxxix] Simon, H. A. 1956. ‘Rational Choice and the Structure of the Environment.’ Psychological Review 63 (2): 129–38.
[xl] Clausewitz, Howard, and Paret. 1976. p. 30.
[xli] Clausewitz, Howard, and Paret. 1976. p. 30.
[xlii] Centeno and Enriquez. 2016. pp. 18–19.
[xliii] Dale and Laplace. 1995. ch. 13, p. 4.
[xliv] Watts, Barry D. 2012. Clausewitzian Friction and Future War. Washington, D.C: Institute for National Strategic Studies, National Defense University. p. 73.
[xlv] Allen, John R., F. Ben Hodges, and Julian Lindley-French. 2021. Future War and the Defence of Europe. Oxford: Oxford University Press. p. 246.
[xlvi] Van Creveld, Martin. 1991. The Transformation of War. New York: Toronto: New York: Free Press; Collier Macmillan Canada; Maxwell Macmillan International. pp 172–73.
[xlvii] Brodie, Bernard. 1976. ‘The Continuing Relevance of On War’. In On War, by Carl von Clausewitz, edited by Michael Eliot Howard and Peter Paret, 45–58. Princeton University Press.