Military Strategy Magazine  /  Volume 5, Issue 1

Strategic Theory and the Logic of Computational Modeling

To cite this article: Elkus, Adam, “Strategic Theory and the Logic of Computational Modeling,” Infinity Journal, Volume 5, Issue 1, Fall 2015, pages 33-38.

Computational theories, models, and simulations are revolutionizing countless areas of research.[i] Could they do the same for strategy? Yes, but only if strategic theory’s core concepts and questions can be captured within the logic of computational modeling. This article justifies this argument by exploring why previous attempts at modeling strategy have failed and why different assumptions about modeling could yield more positive results. The article investigates this debate by first examining challenges in strategic theory and why mathematics and models have not been attractive to strategic researchers. Next, it is explained how computational modeling may be of assistance to inquiries in strategic theory. Finally, the theoretical insights of the prior section are practically outlined by a comparative analysis of how research concerns in strategy can be best matched with different styles of computer program design.

Models and Mischief

Strategic theory is a complex, evolving discipline that has both a storied past and an uncertain future.[ii] Generally, strategists are far more interested in the process of goal-oriented, adversarial strategic interaction than other areas of inquiry.[iii] Many disciplines concerned with the use of organized violence treat the formulation and dynamics of strategy as a black box. A core tenet of faith among many researchers in strategy is that the process by which political communities organize and employ organized violence demands study precisely because other disciplines treat the rationalization of force as an instrument of policy as a trivial matter. It is not enough simply to write about the context in which force is used or the means available for its use. Rather, we must also consider the object to which force is directed and how it is directed towards such an object.[iv]

What are current research problems in strategic theory? Because strategy as a discipline deals with processes that are often ambiguous, poorly understood, and otherwise ill-structured, it risks the production of just-so stories and other problems of internal consistency and explanatory rigor.[v] Another issue lies in the basic assumptions of instrumental rationality and coherence that are often unconsciously used by strategic thinkers and how they differ from what we know about human behavior in the real world.[vi] Strategy has always been criticized for rationalizing what may be unrationalizable.[vii] It does not help that it is possible to retroactively impute strategies or otherwise rationalize them or make inferences about them without any heed to whether or not doing so can be meaningfully justified.[viii] Finally, while all research programs rest on untestable assumptions, those assumptions’ real test is whether they bear fruit in terms of novel discoveries and continued disciplinary progress.[ix] This generation of new hypotheses and ideas is conceptually separate from the reactive act of adjusting existing theories and ideas to protect them from criticism.

There have been several kinds of research methodologies utilized to pursue such analytical aims. One method has been the writing of books and articles that describe the underlying logic and content of ideas about how the process of strategy works. For example, in Arms and Influence, Thomas Schelling argued for a conception of strategy oriented around the “diplomacy of violence,” the process of how violence and perceptions about its use could be used to compel and deter.[x] Another method has been the usage of qualitative case studies to draw out the logic of a theoretical idea by analyzing its emergence in a situation of interest. Carl von Clausewitz himself did so, famously, in his historical analyses of campaigns and other raw material for his theories.[xi]

Verbal theory is certainly useful, but the ambiguity of natural language can often mask hidden assumptions or important issues of internal consistency. Writers in the political realism tradition have often sought to strip away such ambiguities and lay bare the logic of how power is contested.[xii] Second, case studies, if not utilized carefully, may be manipulated to tell desired narratives.[xiii] Given the powerful ambiguities inherent in characterizing the context, preferences, knowledge, goals, methods, and success or failure of particular strategic actors, strategic researchers need to recognize the problems of fitting history into the frame of theory or other abstractions.[xiv]

However, strategic researchers have been wary of changing course, not least due to the disappointing record of competing methods and approaches. Ever since Bernard Brodie called for strategy to be regarded as a science akin to economics and other social scientific theories of choice, researchers in various disciplines have sought to apply statistical, mathematical, and computational tools to the study of strategic behavior.[xv] Their efforts have yielded some useful insights, but today very little work in strategic theory is done with these tools and methods. Why? There are multiple valid explanations, but a method is only useful if it helps researchers investigate problems of interest to them and is effective compared to the alternatives. In general, the usage of quantitative, mathematical, and computational tools in strategic practice has often been marked by an inability to properly select tools for investigating strategic theory’s own unique disciplinary questions, problems, and topics.[xvi]

Many mainstream methods in the social sciences are “behaviorist” in nature.[xvii] Theory assumes a set of fixed entities with variable attributes.[xviii] Theoretical claims are translated into hypotheses, which in turn are tested by operationalizing the hypotheses into a statistical model that aims to account for observational data. There are multiple reasons why this has proved problematic for strategy. First, it is questionable whether doing so truly allows tests of the theory. The phenomena being studied are not linear, additive, or straightforward. Nor are the processes involved in many complex situations necessarily consistent with the assumptions of enumerative probability behind much of basic social science statistics.[xix] Even when this is assumed away, a core problem lies in the assumption that strategies are equivalent to simple, mutually exclusive choices (e.g., use airpower or use landpower). Finally, the data to test such assumptions is often elusive.[xx] And even when it exists, the researcher may nonetheless fail to find the right measure for dealing with the complexities of strategic interaction.[xxi]

While the prior set of methods is mainly inductive in form, another tradition is deductive and highly formal in nature.[xxii] It is often rooted in the construction of mathematical models of both individual and group choice, allowing the researcher to formally demonstrate that certain tendencies inevitably result from the structure of the situation.[xxiii] If we assume X, Y, and Z, what logically follows? While this method undoubtedly has deeper roots in strategy’s past than many alternatives, it also has some flaws.[xxiv] Certainly criticisms may be made about the assumptions such ideas make about the actors and situations being surveyed.[xxv] However, the most basic problem is simply that it represents strategy as a discrete choice. Most work in strategic theory today makes no such assumption, arguing instead that strategy is either a bridge between goals and behavior or a way of changing the context of the interaction to put oneself in a commanding position in regard to some goal.[xxvi]
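The deductive style described above can be made concrete with a minimal, hypothetical sketch: two actors each face a discrete choice, and the analyst derives which outcome profiles are stable (mutual best responses) from assumed payoffs. The choice labels and payoff numbers below are purely illustrative assumptions, not claims about any real interaction.

```python
# A minimal sketch of the deductive, game-theoretic style of analysis:
# two actors each face a discrete choice, and we derive which outcome
# profiles are stable (neither actor gains by deviating) from assumed
# payoffs. All labels and numbers are hypothetical illustrations.

PAYOFFS = {
    # (row_choice, col_choice): (row_payoff, col_payoff)
    ("escalate", "escalate"): (-5, -5),
    ("escalate", "concede"): (3, -2),
    ("concede", "escalate"): (-2, 3),
    ("concede", "concede"): (0, 0),
}
CHOICES = ("escalate", "concede")

def best_responses(payoffs, choices):
    """Return the profiles from which neither actor gains by deviating."""
    stable = []
    for r in choices:
        for c in choices:
            row_ok = all(payoffs[(r, c)][0] >= payoffs[(alt, c)][0]
                         for alt in choices)
            col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, alt)][1]
                         for alt in choices)
            if row_ok and col_ok:
                stable.append((r, c))
    return stable

print(best_responses(PAYOFFS, CHOICES))
# → [('escalate', 'concede'), ('concede', 'escalate')]
```

Note how the conclusion (one side backs down) follows entirely from the assumed structure, which is exactly the property the text criticizes: the analysis only speaks to strategy insofar as strategy really is a one-shot discrete choice.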

Computational Models and Mechanisms

Can computer simulation help? It can, but only when used in view of a particular philosophy of programming and modeling. Computer models may be seen as a unique “third” method of inquiry in between deductive and inductive methods of research and theory development.[xxvii] Computational modeling is far more similar in philosophy and outlook to existing research methods than many may believe, and where it differs it offers a useful complement. Its emphasis on investigating complex, often ill-understood, and intangible structures and processes, and its interest in using qualitative structure and mechanism for explanation, sit comfortably alongside existing approaches in strategic analysis. Computational methods are oriented around processes, mechanisms, and structures, eschewing pure parsimony and black boxes in favor of explaining how some underlying mechanism or interaction produces the behavior or outcomes of interest.

Like strategic researchers, computational modelers are more interested in explanation, proof of concept, assumptions, and illumination of hierarchy and process than in simply providing the most parsimonious fit possible for the “data.” Computational modeling is a useful tool when getting hold of the process of adversarial behavior and choice matters more than anything else, and such models may be most useful in simulating the context and decision processes of strategic actors as well as the internal cognitive and behavioral elements that figure into those processes. To understand how and why, a brief review of the history and philosophy of research with computer models and computational theories is provided.

In many disciplines, computer simulation has been utilized for purposes of research. Computer simulation may be regarded as a “third” approach to theory that is not necessarily inductive or deductive, but combines features of the two.[xxviii] The mode of analysis is discovering how the relation between the environment, task, and the agent produces behavior.[xxix] Theory development involves the construction of computer programs whose algorithms and data structures encode and formalize theories or aspects of theories.[xxx] Empirical experiments are performed utilizing these programs in the hope that they may shed light on systems whose structure and operation are complex and often unobservable or difficult to quantify.[xxxi]

What the many fields that utilize such methods share is an emphasis on process, hierarchy, choice, and mechanism.[xxxii] Many sciences have elaborate hierarchies and taxonomies of the phenomena they study. But how the components of a system interact and produce dynamic behavior is a much trickier matter.[xxxiii] While other sciences focus on quantitative data, a computational theory often may be judged by its ability to present the full range of important behaviors of interest, the breadth of situations to which it is applicable, and the parsimony of the mechanisms it uses to explain behavior.[xxxiv] Hence, this approach strikes a balance between the formality and precision of statistical and mathematical models and the sometimes vague and static nature of purely verbal theories.

Still, a skeptical audience may find the idea of building computer programs and then experimentally evaluating them suspicious. How do we know that they have any relationship to reality? Some of these criticisms stem from their lack of predictive power. However, it may be countered that utilizing prediction as the sole criterion of modeling value is problematic in the extreme. Prediction is often most achievable when the phenomena of interest are stationary and regular; it is much more problematic when the object of study does not conform to such postulates. Additionally, there are many other reasons to model than to predict.[xxxv] Models can explain phenomena of interest, guide data collection, suggest useful analogies, cast light on core uncertainties, expose hidden assumptions, bound outcomes to plausible ranges, illuminate core aspects of interest, challenge conventional wisdom, and generally reveal the apparently simple to be complex.

This laundry list can be simplified by saying that computational models help us examine the mechanisms behind theory. The idea of a mechanism, in some shape or form, has become increasingly popular in the social and behavioral sciences.[xxxvi] Mechanisms entail entities and activities that produce regularities of interest to the researcher.[xxxvii] One half of the explanation is how the system’s internal parts interact to produce external behavior; the other lies in the way in which a mechanism connects observed events of interest.[xxxviii] This suggests two primary uses of mechanisms in computational models that are broadly similar to research traditions in strategy: formalizing theory and hypothesis discovery.

By constructing a computational artifact that renders a theory precise in its qualitative assumptions and performing experiments with it, researchers attain the opportunity to discover flaws and hidden assumptions that might not otherwise be clear from the verbal theory alone. This may be an interesting theoretical result in and of itself or a spur to further research. Likewise, building a computational artifact and then performing experiments with it may suggest interesting new hypotheses for future research. Even familiar situations may look very different when their core assumptions are altered and the results simulated.[xxxix] Work in simulation of military strategy, deterrence, and decision behavior has focused on both research aims.[xl]

Computer Programming for Strategic Theory: A User’s Manual

The next natural question a skeptic might ask is: “Sounds great in theory, but how do I actually program something of intellectual value?” All tools have limitations, and computer programming is no different. That said, computers, while limited and often crude tools, at least offer an array of diverse solutions for the discerning researcher. This is illustrated through a brief review of the practical advantages, tradeoffs, and limitations of several well-known programming languages, their program design philosophies, and their respective potentials for strategic research. The previous section identified two basic advantages of computational models in regard to strategy: they may be most useful in simulating the context and decision processes of strategic actors as well as the internal cognitive and behavioral elements that figure into these processes. As a basic proof of concept, it will be explained how both can be modeled using object-oriented and symbolic programming respectively.

First, if there is one thing that computer programs are useful for, it is representing the structure and process surrounding complex strategic decisions and interactions. In social science simulations, structure and process are often represented and computed through the use of what are called “object-oriented” programming languages. Examples of object-oriented programming languages include Python, Java, and C++.[xli] Object-oriented programs may be seen as modular blocks that divide the world into classes of different types of objects and subclasses that inherit characteristics from more general classes. Object-oriented program design figures heavily into agent-based simulations of complex social phenomena.[xlii] One can represent interacting components of a system, such as the interacting elements of a society engulfed by civil war, quickly and painlessly.[xliii]
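A minimal Python sketch illustrates the object-oriented idea: a general class of actors, with subclasses that inherit its attributes and specialize its behavior. The class and attribute names here are invented for illustration, not drawn from any real simulation.

```python
# Minimal sketch of object-oriented representation: a general class of
# actors, with subclasses that inherit its data fields and override its
# behavior. All class and attribute names are hypothetical illustrations.

class Actor:
    def __init__(self, name, resources):
        self.name = name            # data field: identity
        self.resources = resources  # data field: capacity to act

    def act(self):
        return f"{self.name} acts with {self.resources} resources"

class State(Actor):
    # Inherits name/resources; specializes behavior.
    def act(self):
        return f"{self.name} garrisons with {self.resources} resources"

class Insurgent(Actor):
    def act(self):
        return f"{self.name} harasses with {self.resources} resources"

# Heterogeneous components of a simulated society, handled uniformly
# because both subclasses share the Actor interface.
society = [State("Government", 100), Insurgent("Faction A", 10)]
for component in society:
    print(component.act())
```

The point of the design is the last loop: because every component shares the `Actor` interface, a simulation can iterate over a heterogeneous population without caring which concrete type each element is.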

The greatest strength of object-oriented program design for strategic theory is that many interacting and heterogeneous elements can be simulated simultaneously. Researchers in security and strategy understand that the structures and processes they are simulating have variegated components and dynamics and often highly nonlinear interactions.[xliv] Moreover, multi-scale regularities often interact with each other.[xlv] Because instances of objects in object-oriented programs have both data fields to store attributes and procedures to perform actions, they allow a kind of representational flexibility that traditional programming languages that separate data and algorithms do not.[xlvi]

Assume, for example, that we would like to create a model of Stephen Peter Rosen’s theory of military innovation.[xlvii] Rosen posits military innovation as a struggle between components of a military organization to determine its destiny. However, military organizations are also responsive to external strategic shifts. Various components of the military organization might be represented as objects and given attributes (such as rank and promotional details) as well as the ability to struggle to enact their proposed course of action. They might socially interact with each other according to the principles of Rosen’s theory. The external environment might be represented as a simulated world consisting of other such military organizations similarly reacting to technological and strategic changes, and so on. This may confirm existing intuitions or suggest neglected assumptions in Rosen’s theory that may merit further investigation, as is the pattern in many models of this type.[xlviii]
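To suggest how such a model might begin, here is a deliberately toy sketch, not Rosen's actual theory or any published model: branches within a military organization compete to promote senior advocates of their preferred course, and external strategic shifts favor one branch over another. Every attribute, update rule, and name below is a hypothetical simplification.

```python
# An illustrative (not authoritative) toy version of the kind of model
# described in the text: branches of a military organization compete to
# accumulate senior advocates, and external strategic shifts favor one
# branch's preferred innovation. All attributes and update rules are
# hypothetical simplifications, not Rosen's theory itself.

class Branch:
    def __init__(self, name, senior_advocates):
        self.name = name
        self.senior_advocates = senior_advocates  # rank/promotion detail

    def promote(self, external_pressure):
        # A branch favored by the external strategic environment gains a
        # senior advocate through promotion; other branches stagnate.
        if external_pressure == self.name:
            self.senior_advocates += 1

def run(branches, pressures):
    """One promotion round per external strategic shift; return the
    branch that ends up with the most senior advocates."""
    for pressure in pressures:
        for branch in branches:
            branch.promote(pressure)
    return max(branches, key=lambda b: b.senior_advocates).name

# A simulated run in which external shifts repeatedly favor one branch.
branches = [Branch("carrier aviation", 2), Branch("battleships", 3)]
print(run(branches, ["carrier aviation"] * 3))
# → carrier aviation
```

Even a sketch this crude exposes assumptions that would need defending in a serious model: that promotion is the sole mechanism of intra-organizational struggle, and that the environment favors exactly one branch per period.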

Other kinds of research that focus more on beliefs, concepts, images, and other intangibles, or on the representation of complicated plans and goals, necessitate what are called “symbolic” programming languages (such as Lisp, Scheme, and Prolog) that can encode less clear-cut kinds of concepts and relationships and offer enormous flexibility for domain-specific use.[xlix] The Lisp programming language, for example, is popular in cognitive science and artificial intelligence because of its “stratified design” approach.[l] Each level of a Lisp program may be regarded as a distinct layer built on a sublevel, and so on. Each level may, in other words, refer to a different type of concept or abstraction. It is no wonder that this makes Lisp ideal for investigating complex decision processes and the thinking and reasoning behind them. Lisp also offers the flexibility of a “dynamic” language that can be utilized for rapid experimentation, prototyping, and even extension for domain-specific problems.[li]

Lisp and similar languages are useful primarily because they can represent with great detail intangible and often highly tacit strategic research subjects such as plans, problem-solving concepts, and memories in individuals and organizations. For example, analogical reasoning and use of prior historical frameworks is often seen in research on strategy and decision-making.[lii] Defense planning under conditions of uncertainty, processes of reasoning and assessment, and decision-making are also core elements of strategy where cognition and behavior intersect.[liii] This is doubly true when explicit reference must be made to cognitive-affective attributes of adversarial behavior.[liv] Finally, representing highly knowledge-rich strategies and tactics themselves and their various attributes requires flexible tools and representational faculties. Representing the proposed theory of victory for special operations warfare, for example, would necessitate modeling the intersection of both sequential and cumulative strategic approaches to the planning of campaigns.[lv]

Lisp and similar languages can account for such issues through their representational flexibility. Assume, for example, that we would like to build a model of how a campaign is planned in order to investigate controversy over a particular theory or approach to campaign planning.[lvi] One could utilize Lisp’s capacity for representing symbolic knowledge to build a hierarchy of concepts and sub-concepts representing the knowledge and goals of military decision makers as well as rules for transforming the knowledge into behavior. It may be discovered during the production of such a model that it produces errors, flaws, or seemingly irrational outputs that later can be traced back to inherently flawed assumptions about the reasoning process that guided the plan.[lvii] Perhaps this may suggest some interesting new modes of research about campaign design that may not have otherwise occurred to researchers.
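The symbolic style can be approximated even outside Lisp. The sketch below, written in Python for accessibility (a Lisp program would express the same idea more natively with its native list structures), represents a campaign goal as a hierarchy of concepts and sub-concepts plus rules that transform leaf-level goals into actions. All goal names and rules are invented illustrations.

```python
# A rough Python analogue of symbolic knowledge representation: a
# campaign goal is a hierarchy of concepts and sub-concepts, and a rule
# table transforms leaf-level goals into candidate actions. All goal
# names and rules are hypothetical; Lisp's nested lists would express
# the same hierarchy more natively.

PLAN = ("secure-region",
        ("interdict-supply", ("patrol-routes",), ("strike-depots",)),
        ("hold-terrain", ("garrison-towns",)))

RULES = {
    "patrol-routes": "deploy light units",
    "strike-depots": "task air assets",
    "garrison-towns": "deploy infantry",
}

def expand(goal):
    """Walk the goal hierarchy, emitting an action for each leaf
    sub-goal; a missing rule surfaces as an explicit gap in the plan."""
    head, *subgoals = goal
    if not subgoals:
        return [RULES.get(head, f"no rule for {head}")]
    actions = []
    for sub in subgoals:
        actions.extend(expand(sub))
    return actions

print(expand(PLAN))
# → ['deploy light units', 'task air assets', 'deploy infantry']
```

The diagnostic value described in the text shows up directly: if a sub-goal has no transformation rule, the output contains an explicit "no rule for …" entry, tracing a flawed or incomplete planning assumption back to a specific concept in the hierarchy.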


Computational modeling offers a promising new direction for strategic theory. It is true that modeling in general has often failed to add intellectual value to discussion of strategic theory and strategic problems. However, computational modelers and strategic theory researchers share some important philosophical and methodological similarities in how they approach their respective areas of study. Of course, computer modeling cannot and should not replace more traditional ways of analyzing strategy. Still, strategic researchers – despite their understandable suspicion of models and simulations – nonetheless should carefully consider how the logic of computational modeling could help improve strategic theory.


[i] Karp, Richard M. “Understanding science through the computational lens.” Journal of Computer Science and Technology 26.4 (2011): 569-577, Denning, Peter J., and Paul S. Rosenbloom. “The profession of IT Computing: the fourth great domain of science.” Communications of the ACM 52.9 (2009): 27-29, Cioffi‐Revilla, Claudio. “Computational social science.” Wiley Interdisciplinary Reviews: Computational Statistics 2.3 (2010): 259-271, Lazer, David, et al. “Life in the network: the coming age of computational social science.” Science (New York, NY) 323.5915 (2009): 721, Conte, Rosaria, et al. “Manifesto of computational social science.” The European Physical Journal Special Topics 214.1 (2012): 325-346, Wing, Jeannette M. “Computational thinking.” Communications of the ACM 49.3 (2006): 33-35, and Thagard, Paul. Computational philosophy of science. MIT press, 1993.
[ii] See Hoffman, Frank G. “Enhancing American strategic competency.” Liberal Wars: Anglo-American Strategy, Ideology and Practice (2015): 86-107, Freedman, Lawrence. “Does Strategic Studies have a Future?” in Strategy in the Contemporary World: 391-409, Marshall, Andrew W. “Strategy as a Profession for Future Generations.” On Not Confusing Ourselves: Essays on National Security Strategy in Honor of Albert and Roberta Wohlstetter (1991): 302-311, Betts, Richard K. “Should strategic studies survive?.” World Politics 50.01 (1997): 7-33, Betts, Richard K. “Is strategy an illusion?” International Security 25.2 (2000): 5-50, Strachan, Hew. The Direction of War: Contemporary Strategy in Historical Perspective. Cambridge University Press, 2013, and Strachan, Hew. “The lost meaning of strategy.” Survival 47.3 (2005): 33-54.
[iii] See Smith M. L. R., and John Stone. “Explaining strategic theory.” Infinity Journal 4 (2011): 27-30.
[iv] Betts, Richard K. “Should strategic studies survive?” World Politics 50.01 (1997): 7-33.
[v] For an explanation of those problems in other disciplines, see Olson, Mark E., and Alfonso Arroyo-Santos. “How to Study Adaptation (and Why To Do It That Way).” The Quarterly Review of Biology 90.2 (2015): 167-191 and Dennett, D. C. “The Frame Problem of AI”, Philosophy of Psychology: Contemporary Readings 433 (2006).
[vi] Payne, Kenneth. The Psychology of Strategy: Exploring Rationality in the Vietnam War. Oxford University Press. 2015.
[vii] Bull, Hedley. “Strategic Studies and Its Critics.” World Politics 20.04 (1968): 593-605.
[viii] Betts, Richard K. “Is strategy an illusion?” International Security 25.2 (2000): 5-50.
[ix] Lakatos, Imre. The methodology of scientific research programmes. Cambridge University Press.
[x] Schelling, Thomas C. Arms and Influence: With a New Preface and Afterword. Yale University Press, 2008.
[xi] Von Clausewitz, Carl. Carl Von Clausewitz: Historical and Political Writings. Princeton University Press, 2014.
[xii] Burnham, James. The Machiavellians, defenders of freedom. Gateway Books, 1987, Von Hilgers,
[xiii] Neustadt, Richard E. Thinking in time: The uses of history for decision makers. Simon and Schuster, 2011.
[xiv] Freedman, Lawrence. Strategy: a history. Oxford University Press, 2013.
[xv] See Brodie, Bernard. “Strategy as a Science.” World Politics 1.04 (1949): 467-488, Bousquet, Antoine. The scientific way of warfare: Order and chaos on the battlefields of modernity. Vol. 1. Cinco Puntos Press, 2009., Edwards, Paul N. The closed world: Computers and the politics of discourse in Cold War America. MIT Press, 1997, and Freedman, Lawrence. “Social Science and the Cold War.” Journal of Strategic Studies ahead-of-print (2015): 1-21.
[xvi] See Murray, Williamson. “Clausewitz out, computer in: military culture and technological hubris.” The National Interest (1997): 57-64, McMaster, H. R. “The human element: When gadgetry becomes strategy.” World Affairs (2009): 31-43, Gibson, James William. The perfect war: Technowar in Vietnam. Atlantic Monthly Press, 2000, Deitchman, Seymour J. “The ‘Electronic Battlefield’ in the Vietnam War.” The Journal of Military History 72.3 (2008): 869-887, and Edwards, Paul N. The closed world: Computers and the politics of discourse in Cold War America. MIT Press, 1997.
[xvii] Easton, David. “Political science in the United States past and present.” International Political Science Review 6.1 (1985): 133-152.
[xviii] See Abbott, Andrew. “Transcending General Linear Reality.” Sociological Theory 6.2 (1988): 169-186 and De Marchi, Scott. Computational and mathematical modeling in the social sciences. Cambridge University Press, 2005.
[xix] This is addressed in Tecuci, Gheorghe, et al. “Computational approach and cognitive assistant for evidence-based reasoning in intelligence analysis.” International Journal of Intelligent Defence Support Systems 5.2 (2014): 146-172 and Schum, D. A., et al. “Toward cognitive assistants for complex decision making under uncertainty.” Intelligent Decision Technologies 8.3 (2014): 231-250. The assumptions of classical probability which both frequentist and Bayesian probability conform to do not hold for situations in which it is difficult if not impossible to determine probabilities via counting. See Tecuci, Gheorghe, et al. “Recognizing and Countering Biases in Intelligence Analysis with TIACRITIS.” STIDS. 2013 for more.
[xx] Watts, Barry D. “Ignoring reality: Problems of theory and evidence in security studies.” Security Studies 7.2 (1997): 115-171.
[xxi] Roche, James G., and Barry D. Watts. “Choosing analytic measures.” The Journal of Strategic Studies 14.2 (1991): 165-209.
[xxii] See Clarke, Kevin A., and David M. Primo. A model discipline: Political science and the logic of representations. Oxford University Press, 2012 and Morgan, Mary S. The world in the model: How economists work and think. Cambridge University Press, 2012.
[xxiii] Kimbrough, Steven Orla. Agents, games, and evolution: Strategies at work and play. CRC Press, 2011.
[xxiv] See Von Hilgers, Philipp. War games: a history of war on paper. MIT Press, 2012, Leonard, Robert. Von Neumann, Morgenstern, and the creation of game theory: From chess to social science, 1900–1960. Cambridge University Press, 2010, and Ayson, Robert. Thomas Schelling and the nuclear age: strategy as social science. Routledge, 2004.
[xxv] See Erickson, Paul, et al. How reason almost lost its mind: The strange career of Cold War rationality. University of Chicago Press, 2013 and McMaster, H. R. “The Uncertainties of Strategy.” Survival 57.1 (2015): 197-208.
[xxvi] See Gray, Colin S. The strategy bridge: theory for practice. Oxford University Press, 2010 and Dolman, Everett. Pure Strategy: Power and Principle in the Space and Information Age. Routledge, 2004.
[xxvii] Axelrod, Robert. “Advancing the art of simulation in the social sciences.” Simulating social phenomena. Springer Berlin Heidelberg, 1997. 21-40.
[xxviii] Simon, Herbert A. The sciences of the artificial. MIT Press, 1996.
[xxix] Cohen, Paul R. Empirical Methods for Artificial Intelligence. MIT Press, 1995.
[xxx] See Kuipers, Theo AF. “Computational Philosophy of Science.” Structures in Science. Springer Netherlands, 2001. 289-315, and Thagard, Paul. Computational philosophy of science. MIT press, 1993.
[xxxi] See Newell, Allen, and Herbert A. Simon. “Computer science as empirical inquiry: Symbols and search.” Communications of the ACM 19.3 (1976): 113-126, Simon, Herbert A. “Artificial intelligence: an empirical science.” Artificial Intelligence 77.1 (1995): 95-127, and Simon, Herbert A. “What is an “explanation” of behavior?.” Psychological Science 3.3 (1992): 150-161.
[xxxii] Heyck, Hunter. Age of System: Understanding the Development of Modern Social Science. Johns Hopkins University Press, 2015, Mirowski, Philip. Machine dreams: Economics becomes a cyborg science. Cambridge University Press, 2002, Edwards, Paul N. The closed world: Computers and the politics of discourse in Cold War America. MIT Press, 1997, Erickson, Paul, et al. How reason almost lost its mind: The strange career of Cold War rationality. University of Chicago Press, 2013, Boden, Margaret Ann. Mind as machine: A history of cognitive science. Oxford University Press, 2006, and Kline, Ronald R. The Cybernetics Moment: Or Why We Call Our Age the Information Age. Johns Hopkins University Press, 2015.
[xxxiii] See Rosenbloom, Paul S. “A new framework for computer science and engineering.” Computer 37.11 (2004): 23-28, Denning, Peter J., and Paul S. Rosenbloom. “The profession of IT Computing: the fourth great domain of science.” Communications of the ACM 52.9 (2009): 27-29, and Rosenbloom, Paul S. On computing: the fourth great scientific domain. MIT Press, 2012.
[xxxiv] Cassimatis, Nicholas L., Paul Bello, and Pat Langley. “Ability, Breadth, and Parsimony in Computational Models of Higher‐Order Cognition.” Cognitive Science 32.8 (2008): 1304-1322.
[xxxv] Epstein, Joshua M. “Why model?” Journal of Artificial Societies and Social Simulation 11.4 (2008): 12.
[xxxvi] Hedström, Peter, and Petri Ylikoski. “Causal mechanisms in the social sciences.” Annual Review of Sociology 36 (2010): 49-67.
[xxxvii] Machamer, Peter, Lindley Darden, and Carl F. Craver. “Thinking about mechanisms.” Philosophy of science (2000): 1-25.
[xxxviii] Glennan, Stuart S. “Mechanisms and the nature of causation.” Erkenntnis 44.1 (1996): 49-71.
[xxxix] Holme, Petter, and Fredrik Liljeros. “Mechanistic models in computational social science.” arXiv preprint arXiv:1507.00477 (2015).
[xl] For some basic examples of this, see Carbonell, Jaime G. “Counterplanning: A strategy-based model of adversary planning in real-world situations.” Artificial Intelligence 16.3 (1981): 295-329, Thagard, Paul. “Adversarial problem solving: Modeling an opponent using explanatory coherence.” Cognitive Science 16.1 (1992): 123-149, and Bringsjord, Selmer, et al. “Nuclear deterrence and the logic of deliberative mindreading.” Cognitive Systems Research 28 (2014): 20-43. Some advanced examples may be found in Kott, Alexander, and William M. McEneaney, eds. Adversarial reasoning: computational approaches to reading the opponent’s mind. CRC Press, 2006, Hudson, Valerie M. Artificial intelligence and international politics. Westview Press, Inc., 1991, and Cimbala, Stephen J. Artificial Intelligence and National Security. Lexington Books, 1986.
[xli] Booch, Grady. Object oriented analysis & design with application. Pearson Education India, 2006.
[xlii] See Gilbert, Nigel, and Pietro Terna. “How to build and use agent-based models in social science.” Mind & Society 1.1 (2000): 57-72 and Grimm, Volker, et al. “A standard protocol for describing individual-based and agent-based models.” Ecological modelling 198.1 (2006): 115-126.
[xliii] See Epstein, Joshua M. “Modeling civil violence: An agent-based computational approach.” Proceedings of the National Academy of Sciences 99.suppl 3 (2002): 7243-7250.
[xliv] See Beyerchen, Alan. “Clausewitz, nonlinearity, and the unpredictability of war.” International Security (1992): 59-90, and Jervis, Robert. System effects: Complexity in political and social life. Princeton University Press, 1998.
[xlv] Sawyer, Robert Keith. Social emergence: Societies as complex systems. Cambridge University Press, 2005.
[xlvi] Hummon, Norman P., and Thomas J. Fararo. “Actors and networks as objects.” Social Networks 17.1 (1995): 1-26.
[xlvii] Rosen, Stephen Peter. Winning the next war: Innovation and the modern military. Cornell University Press, 1994.
[xlviii] Axelrod, Robert M. The complexity of cooperation: Agent-based models of competition and collaboration. Princeton University Press, 1997.
[xlix] Neumann, Günter. “Programming Languages in Artificial Intelligence.” DFKI 13 (2014): 34.
[l] Abelson, Hal, et al. “The LISP experience.” Annual review of computer science 3.1 (1988): 167-195.
[li] Norvig, Peter. “Design patterns in dynamic programming.” Object World 96.5 (1996).
[lii] See Gordon, Andrew S. Strategy representation: An analysis of planning knowledge. Taylor & Francis, 2004, and Khong, Yuen Foong. Analogies at War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965. Princeton University Press, 1992.
[liii] See, for example, Imlay, Talbot C., and Monica Duffy Toft, eds. The Fog of Peace and War Planning: Military and Strategic Planning under Uncertainty. Routledge, 2007, Lock, Ray, Matt Uttley, and Paul Lyall. “Honing defence’s intellectual edge.” The RUSI Journal 156.2 (2011): 6-12, Gartner, Scott Sigmund. Strategic assessment in war. Yale University Press, 1999, Steinbruner, John D. The cybernetic theory of decision: New dimensions of political analysis. Princeton University Press, 1974, and Mintz, Alex. Integrating cognitive and rational theories of foreign policy decision making. Macmillan, 2002.
[liv] Mavor, Anne S., and Richard W. Pew, eds. Modeling Human and Organizational Behavior: Application to Military Simulations. National Academies Press, 1998, and Payne, Kenneth. “Fighting on: emotion and conflict termination.” Cambridge Review of International Affairs ahead-of-print (2014): 1-18.
[lv] Kiras, James D. Special Operations and Strategy: From World War II to the War on Terrorism. Routledge, 2006.
[lvi] Butler, Ed. “Setting Ourselves up for a Fall in Afghanistan: Where Does Accountability Lie for Decision-Making in Helmand in 2005–06?” The RUSI Journal 160.1 (2015): 46-57.
[lvii] See some of the case studies of old AI and cognitive science ideas in Norvig, Peter. Paradigms of artificial intelligence programming: case studies in Common LISP. Morgan Kaufmann, 1992 for an example.