Military Strategy Magazine / Volume 7, Issue 2

The New Unequal Dialogue: Professional Military Advice in the Age of AI-Analytics

To cite this article: Echevarria, Antulio J., “The New Unequal Dialogue: Professional Military Advice in the Age of AI-Analytics,” Military Strategy Magazine, Volume 7, Issue 2, Summer 2020, pages 4-8.

Much has been written on artificial intelligence (AI) to date, especially in recent months. In fact, the volume of AI-related literature is increasing almost as fast as AI computational power, which is reportedly doubling every 3.4 months.[i] That literature now includes such disparate topics as the uses of “big data” and the problematics of autonomous weapons systems.[ii] Fortunately, most experts now say the “singularity” moment, when artificial intelligence will surpass human intelligence, is still in the distant future.[iii] Nevertheless, “weak” or “narrow” forms of artificial intelligence, such as AI-enhanced analytics, are causing subtle yet critical changes in our daily activities, from marketing analysis to competitive sports. One strategy-related activity that may be altered is what American political scientist Eliot Cohen once referred to as the “unequal dialogue” between civilian policymakers and military professionals.[iv] If strategy is the “bridge” between political objectives and military resources, as the late Colin Gray once stated, then the unequal dialogue is the two-way traffic that traverses the bridge.[v] It should be self-evident that a sound strategy requires a good dialogue. But we have not duly considered how weak or narrow forms of AI might affect that exchange; indeed, the potential exists for AI-analytics to make the unequal dialogue more equal, which would in turn have important consequences for a democracy.

This article is necessarily speculative in nature. As one late nineteenth-century American sociologist admitted, science “could not get on without speculation.”[vi] Speculation allows us to get ahead of the change curve, to anticipate future dilemmas, and to begin thinking about what it would take to avoid or resolve them. As noted above, AI computational power is increasing swiftly. For that reason, the time we have available to resolve some of the complex dilemmas that may be created by AI is decreasing rapidly.

I

The unequal dialogue is integral to the use of military force, the crafting of military strategy, and the development of defense policy. Its purpose is to improve the probability of policy success. Accordingly, it involves open and candid discussions in which political and military leaders express their views regarding the advantages or disadvantages of various courses of action. It is unequal (or has been to this point) because the military remains the subordinate player in the dialogue and policymakers have the final say.[vii] Moreover, the inequality of the dialogue supports Cohen’s model of “active control,” whereby policy has the right, if not the duty, to override military commanders and to redirect their efforts at any time. After all, if war’s nature is predominantly (but not exclusively) political, then policy should have the last word when choosing strategic courses of action and overseeing their execution. Additionally, history offers an abundance of examples of how wrong military commanders can be about war and strategy, despite, or perhaps because of, their training and experience. Military advice is, therefore, essentially a matter of opinion, albeit a seasoned opinion in many cases. Indeed, the intense disputes between enemy-centric and population-centric approaches to counterinsurgency doctrine provide but one instance of how contradictory that advice can be.

For purposes of this article, AI simply means any automated process driven by algorithms, or sets of instructions, that improve the system’s performance of an activity, whether it be playing chess or Go, driving cars, or using drones to conduct reconnaissance and surveillance missions.[viii] Improvements occur whenever systems perform these activities because some algorithms are designed to draw data from each event (such as a chess move) and to use those data to rewrite other algorithms. The performance of the system improves, or should, with each iteration of a task. In short, the system appears to be learning from its own experiences.
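To make this feedback loop concrete, consider the minimal sketch below, written in Python. It is an illustration only: the moves, reward values, and parameters are entirely hypothetical, and real systems use far more sophisticated methods. The point is simply that each event feeds data back into the system’s own estimates, so performance improves without a human rewriting anything.

```python
import random

# Minimal sketch of a system that "learns from its own experiences."
# All moves, probabilities, and parameters here are hypothetical.

def simulate_event(move: str) -> float:
    """Stand-in for one event (e.g., a chess move); returns 1.0 on success."""
    hidden_quality = {"aggressive": 0.6, "defensive": 0.4, "waiting": 0.5}
    return 1.0 if random.random() < hidden_quality[move] else 0.0

estimates = {"aggressive": 0.0, "defensive": 0.0, "waiting": 0.0}
counts = {move: 0 for move in estimates}

for _ in range(10_000):
    # Usually pick the move currently believed best; sometimes explore.
    if random.random() < 0.1:
        move = random.choice(list(estimates))
    else:
        move = max(estimates, key=estimates.get)
    reward = simulate_event(move)
    counts[move] += 1
    # Each event's data nudges the estimate toward observed outcomes.
    estimates[move] += (reward - estimates[move]) / counts[move]

print({move: round(value, 2) for move, value in estimates.items()})
```

Multiply this trivial loop by millions of events and parameters, and one has the rudiments of the apparent “learning” described above.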

When linked to analytics, or the computational analysis of data, AI can accelerate the process of extracting meaningful patterns from vast seas of information.[ix] It can explore branches and sequels; accommodate variations in terrain, climate, weather, and force structures; and assess data from war games, simulations, and historical case studies. Analytics are, of course, as old as arithmetic. But the ability to navigate oceans of data with an artificially intelligent analyst at the helm is new and is turning AI-enhanced analytics into what some pundits have called “prediction machines.”[x] A good prediction is nothing more than an answer that has a higher probability of being correct than other available answers for a given set of circumstances. The more good data we can feed into this process, the greater its probability of generating good predictions. One analysis of economic crises, for instance, predicted that internal factors (such as shareholder uneasiness) were more important in triggering downturns than external events.[xi] Similarly, other research has predicted that persuading the population to share critical information about insurgent activities matters more in counterinsurgency campaigns than merely protecting noncombatants from combatants.[xii] Hence, the ongoing debate between enemy-centric and population-centric theories of counterinsurgency can now take a new turn. To be sure, AI-driven analytics cannot predict or guarantee victory. But they can reveal essential causal relationships among data in such a way as to move military opinion closer to verifiable fact.
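A deliberately simple sketch can illustrate what “prediction” means here: estimate each candidate answer’s probability of being correct from historical cases, then select the answer with the highest probability. The handful of cases below are invented placeholders, not data from the studies cited above; real analytics would draw on vastly larger and richer datasets.

```python
from collections import defaultdict

# Hypothetical historical cases: (approach, outcome), where outcome 1 = success.
# These records are invented placeholders for illustration only.
cases = [
    ("population-centric", 1), ("population-centric", 1),
    ("population-centric", 0), ("enemy-centric", 1),
    ("enemy-centric", 0), ("enemy-centric", 0),
]

wins = defaultdict(int)
totals = defaultdict(int)
for approach, outcome in cases:
    totals[approach] += 1
    wins[approach] += outcome

# Empirical probability of success for each candidate answer.
probabilities = {a: wins[a] / totals[a] for a in totals}

# The "good prediction": the answer with the higher probability of being correct.
prediction = max(probabilities, key=probabilities.get)
print({a: round(p, 2) for a, p in probabilities.items()}, "->", prediction)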

Utilizing AI-enhanced analytics in this way has significant drawbacks that must be acknowledged. First, it requires accepting the proposition that historical data should guide future decisions. That proposition is a risky one because the circumstances of the past and the present are never the same, except in a controlled environment. Nonetheless, sometimes the proposition is borne out. West Point’s football team follows its analytics when deciding whether to “go for it” on fourth down in short-yardage situations. Variables such as time left in the game and field position are never quite the same when the decision has to be made; yet the analytics provide a reasonably reliable guideline for “predicting” which decision (going for it, punting, or attempting a field goal) would be most beneficial.[xiii] While the analytics are reliable, they are not always correct; moreover, it is easy to become over-reliant on them. Over-reliance is particularly problematic since incomplete, “bad,” or “poisoned” data and faulty or biased algorithms can undermine any AI system. Gathering good data and protecting them will be critical, as will filtering biases from our algorithms. These drawbacks notwithstanding, we would do well to remember that AI-driven analytics need not be flawless; they simply need to outperform human judgment on a consistent basis, which, in turn, will give them credibility in the public eye.
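The logic behind such fourth-down analytics can be sketched as an expected-value comparison across the three options, given the game state. The probabilities and point values below are hypothetical placeholders, not the model West Point’s staff actually uses.

```python
# Hypothetical expected-points model for a fourth-down decision.
# All probabilities and point values are invented for illustration.

def expected_points(option: str, yards_to_go: int, field_position: int) -> float:
    """field_position = yard line measured from one's own goal (0-100)."""
    if option == "go_for_it":
        p_convert = max(0.1, 0.65 - 0.05 * yards_to_go)   # shorter is easier
        return p_convert * 4.0 + (1 - p_convert) * -1.5   # drive value vs. turnover
    if option == "field_goal":
        p_make = max(0.0, 1.0 - (100 - field_position) / 60)  # closer is easier
        return p_make * 3.0 + (1 - p_make) * -1.0
    if option == "punt":
        return 0.5  # modest, steady value of flipping field position
    raise ValueError(f"unknown option: {option}")

state = {"yards_to_go": 2, "field_position": 60}  # fourth and short, near midfield
options = ["go_for_it", "field_goal", "punt"]
best = max(options, key=lambda o: expected_points(o, **state))
print({o: round(expected_points(o, **state), 2) for o in options}, "->", best)
```

With fourth and two near midfield, this toy model favors going for it; change the inputs and the recommendation changes with them, which is precisely how such guidelines can be reasonably reliable without being always correct.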

II

Just as AI-analytics can help improve a team’s performance in competitive sports, so too it can enhance military advice by aiding in the cultivation of the military’s corpus of professional knowledge. Over time, the rapid and iterative analysis of historical and other data made possible through AI-analytics will strengthen professional military advice by eliminating some of the faulty assumptions, or myths, that underpin it. Professional military advice will never be infallible, but it will become more credible and, therefore, more difficult for civilian policymakers to question or to overrule. Recall the 2003 testimony of US Army Chief of Staff General Eric Shinseki. When questioned by the US Senate Armed Services Committee about the number of troops it would take to stabilize Iraq, Shinseki said he believed “several hundred thousand soldiers” would be needed.[xiv] His advice was the product of his considerable experience and training. But it was also subjective and, for that reason, eminently contestable. In fact, Defense Secretary Donald Rumsfeld and Deputy Secretary of Defense Paul Wolfowitz dismissed it as “wildly off the mark.”[xv] With AI-enhanced analytics added to the debate, however, tomorrow’s professional military advice may be slightly, but seldom wildly, off the mark. Again, it is less a question of whether Shinseki was right (some research claims he was not) than the fact that Rumsfeld and Wolfowitz were wrong, egregiously wrong.[xvi]

Accordingly, military advice will likely have to be taken more seriously in the future, and it may well shape policy more than policy shapes it. Consider the additional persuasive power General Stanley McChrystal’s 2009 strategic assessment of Afghanistan would have had with the benefit of AI-enhanced analytics.[xvii] At a minimum, it put enormous pressure on the Obama administration to commit more troops to Afghanistan; it also fueled (or refueled) the strategic debate along partisan lines.[xviii] If the predictive accuracy of such strategic assessments should increase, the military is likely to become more influential in shaping US defense and foreign policies. This concern is an especially important one given the high esteem in which the American public tends to hold its military professionals, compared to the generally low regard it has for its politicians.[xix] Most military professionals do not have to fight an uphill battle for credibility on defense matters; however, most policymakers do. Taken collectively, these factors suggest we may see the weight of the unequal dialogue either become more balanced or shift in favor of the military.

To be sure, political leaders will retain the final word, legally and constitutionally. But in practice they may find themselves tacitly deferring to military advice; for they will have little to gain by contradicting it. The situation will thus be one of de facto military, rather than civilian, supremacy in matters of defense policy and military strategy. If so, war’s grammar (its guiding principles) may increasingly direct policy’s logic. The examples of Rumsfeld and Wolfowitz, though more than a decade old, still serve as warnings to policymakers—the penalty for being wrong in wartime is high, as well it should be. Rumsfeld, Wolfowitz, and others ultimately lost their positions within the US government. If the unequal dialogue becomes more equal, or favors military professionals, it is also likely to undermine Cohen’s model of active control, whereby political leaders insert themselves vigorously and repeatedly in the process of strategy formulation. Instead, we may see the emergence of what Cohen called “normal control,” an ideal rather than a real model, in which political leaders allow the military to conduct operations largely unmolested.[xx]

Furthermore, we cannot rely on the military’s professional code to prevent it from asserting itself in strategy debates, particularly if it believes its perspective is buoyed by a body of knowledge that has been scientifically validated. Over the course of US history, American political and military leaders have occasionally wrestled for control over the country’s defense policies, and they will continue to do so. Some US political leaders have had to take extraordinary measures to counter stubborn or outspoken military experts. President Theodore Roosevelt orchestrated a debate over “all big-gun” battleships in 1907 that led to the “dethroning” of Alfred Thayer Mahan, America’s foremost naval expert at the time.[xxi] President Calvin Coolidge instigated the court-martial of William “Billy” Mitchell, who had repeatedly criticized the US government for not establishing an independent air service.[xxii] America’s “Revolt of the Admirals” in 1949 and its “Revolt of the Generals” in 2006 are examples of similar problems, wherein military professionals sought to coerce the government into pursuing a particular defense policy.[xxiii] As American social scientist Samuel Huntington once warned, democracies may well have more to fear from the military expert armed with superior technical knowledge than they might from the overt threat of a coup.[xxiv] In the age of AI-analytics, the importance of military experts will almost certainly increase.

At the same time, there is nothing to preclude policymakers from procuring their own sources of AI-analytics to counter those of military experts. The result will be an analytics arms race of sorts, with each side attempting to outdo the other. In a word, analytics will become weaponized. But that will only make them more important, not less, in any strategic debate. Like footnotes in a scholarly book or article, analytics will be required sources for any serious strategic publication. The question will not be whether analytics are trustworthy, but rather whose or which analytics are most trustworthy. Eventually, the company with the better track record for reliable predictions will become the “Harvard of strategic analytics.” Its voice will count more than the others, and buying that voice will not be cheap. But the larger issue is that outsourcing strategic analyses in this way introduces a third interlocutor into the unequal dialogue. That third interlocutor, moreover, will not necessarily strengthen the political side of the dialogue and may routinely weaken it, depending on whether the military’s professional knowledge rests on firm foundations or on intuition.

None of this is to say the military would be spared considerable internal friction in the years ahead. The US military hardly speaks with one voice, despite the passage of the Goldwater-Nichols Act (1986) decades ago and the services’ subsequent movement toward jointness. Extant divisions and rivalries among and within the services would persist for a time, and may worsen considerably, as debates arise over how best (or whether) to incorporate the findings made possible by the new data sciences into a collective body of professional knowledge. Military experts, too, would become dependent on AI-driven analytics and would feel pressure to keep pace with a growing body of knowledge, or risk becoming irrelevant. Expertise, after all, can have a relatively brief shelf life. Analytics would also become integral to the congressional hearings and courts-martial that follow any failed military action. We can easily imagine the military establishment censuring some of its commanders for not following those courses of action the official analytics of the day had recommended. Conversely, we can easily imagine military leaders being relieved for not knowing whether or when to override or discount the official analytics. We can only guess what analytics might have predicted about the Japanese attack on Pearl Harbor in 1941, the debacle of the Bay of Pigs invasion in 1961, or the failure of Desert One in 1980. As always, scapegoats will remain essential in politics and in war, and AI-analytics will cut both ways.

Conclusion

AI’s famed singularity moment, which is still in the distant future, has drawn a disproportionate amount of our attention. While some experts claim the way ahead is for humans “to trust that AI knows better than them,” this essay has put forth the counterclaim that placing our trust in AI-analytics is the equivalent of a pseudo-singularity moment.[xxv] Such a moment may be neither harmful nor avoidable; however, we should approach it with an informed sense of its potential effects. Unfortunately, since AI computational power is doubling rapidly, the time we have available to discuss, debate, and perhaps prevent some of its effects is diminishing just as rapidly. Moreover, preliminary evidence suggests Generation Alpha (the “screenagers” born between 2010 and 2024) is more willing to trust AI and the data sciences than its predecessors have been.[xxvi] As Generation Alpha matures, many of the speculations entertained in this essay stand to become less theoretical and more real.

Chief among these is the possibility that AI-analytics may render the future unequal dialogue more equal, or even unequal in the direction of the military. If so, then it will also challenge the fundamental proposition that policy’s logic is entitled to override military grammar, even at the cost of the occasional failed operation. In the past, policy’s primacy helped preserve civilian supremacy over the military, a vital principle for a democracy. However, the reality may turn out to be that, while policy remains entitled to override military grammar, it is unwilling to risk doing so. Hence, we will need to find ways to offset this de facto military supremacy. Yet it may also be time to discuss whether military grammar should, in fact, have some veto power over policy’s logic, that is, whether a more equal dialogue might benefit the republic. Either way, we also need to realize the trust we place in any AI system is not fully ours to control.

References

[i] Cliff Saran, “Stanford University Finds that AI is Outpacing Moore’s Law,” ComputerWeekly.com; https://www.computerweekly.com/news/252475371/Stanford-University-finds-that-AI-is-outpacing-Moores-Law?vgnextfmt=print.
[ii] Compare: Davis and Nicholas Keeley, “The Input-Output Problem: Managing the Military’s Big Data in the Age of AI,” War on the Rocks, Feb. 13, 2020.
[iii] Michael C. Horowitz, “The Promise and Peril of Military Applications of Artificial Intelligence,” Bulletin of the Atomic Scientists, April 2018; https://thebulletin.org/2018/04/the-promise-and-peril-of-military-applications-of-artificial-intelligence/.
[iv] Eliot A. Cohen, Supreme Command: Soldiers, Statesmen, and Leadership in Wartime (New York: Anchor, 2003), 10.
[v] Colin Gray, The Strategy Bridge: Theory for Practice (Oxford: Oxford University Press, 2010).
[vi] Franklin H. Giddings, Principles of Sociology (New York: Macmillan, 1896), xvi-xvii, emphasis original.
[vii] William E. Rapp, “Ensuring Effective Military Voice,” Parameters 46, 4 (Winter 2016-17): 13-25.
[viii] Compare: “Artificial Intelligence for Executives,” SAS White Paper, p. 2.
[ix] “Do You Know the Difference Between Data Analytics and AI Machine Learning?,” Forbes, Aug. 1, 2018; https://www.forbes.com/sites/forbesagencycouncil/2018/08/01/do-you-know-the-difference-between-data-analytics-and-ai-machine-learning/#7592c0225878.
[x] Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Cambridge, MA: Harvard Business Review Press, 2018).
[xi] Dion Harmon et al., “Anticipating Economic Market Crises Using Measures of Collective Panic,” PLoS ONE (2015); https://dx.plos.org/10.1371/journal.pone.0131871.
[xii] Eli Berman, Joseph H. Felter, and Jacob N. Shapiro, Small Wars, Big Data: The Information Revolution in Modern Conflict (Princeton: Princeton University Press, 2018), 18.
[xiii] The team’s use of analytics has not always brought success, but it has led to a respectable fourth-down conversion rate of 85.7 percent in 2018 and 66.7 percent in 2019. https://www.forbes.com/sites/donyaeger/2019/09/20/an-eye-on-the-future-army-coach-jeff-monken-embraces-analytics-old-school-football-to-create-a-fearless-business-lesson/#391d49186be6.
[xiv] https://www.bing.com/videos/search?q=shinseki+testimony+to+congress+feb+25%2c+2003&&view.
[xv] Eric Schmitt, “Pentagon Contradicts General on Iraq Occupation Force’s Size,” New York Times, Feb. 28, 2003.
[xvi] Andrew J. Enterline, J. Michael Greig, and Yoav Gortzak, “Testing Shinseki: Speed, Mass and Insurgency in Post-war Iraq,” Defense and Security Analysis 25, 3 (Aug 2009): 235-53.
[xvii] Commander, NATO International Security Assistance Force, Initial Assessment, 30 August 2009. The references in his assessment included a list of authorizations. Suppose it also featured the name of a well-respected defense analytics organization with a formidable database of counterinsurgency and counterterrorism campaigns.
[xviii] Compare: David M. Brown, “Afghanistan: The McChrystal Assessment,” The Atlantic, Sept. 1, 2009; https://www.theatlantic.com/politics/archive/2009/09/afghanistan-the-mcchrystal-assessment/24174/; and Eric Schmitt and Thom Shanker, “General Calls for More US Troops to Avoid Afghan Failure,” New York Times, Sept. 20, 2009; https://www.nytimes.com/2009/09/21/world/asia/21afghan.html.
[xix] “Trust and Distrust in America,” Pew Research Center, July 22, 2019; https://www.pewresearch.org/politics/2019/07/22/trust-and-distrust-in-america/#fn-20070758-2.
[xx] Eliot A. Cohen, “Supreme Command in the 21st Century,” Joint Force Quarterly (Summer 2002), 51.
[xxi] Philip A. Crowl, “Alfred Thayer Mahan: The Naval Historian,” Peter Paret, ed., Makers of Modern Strategy: From Machiavelli to the Nuclear Age (Princeton: Princeton University Press, 1986), 444-77.