Introduction
This article argues that contemporary digital manipulation constitutes a form of informational coercion aimed at constraining the freedom of action of democratic states by shaping political costs, social cohesion, and strategic will. It contends that Western democracies can counter such coercion effectively only through an integrated, campaign-based approach—combining regulation, platform governance, civic resilience, and international cooperation—while preserving democratic legitimacy, which remains their central strategic vulnerability.
Modern warfare rarely begins on the day it is announced. In many cases, it starts much earlier, in places that don’t appear on maps and in conversations that haven’t yet taken place. This was the case in Crimea in 2014, when the incursion of armed men without insignia took the world by surprise, although the real offensive had begun weeks earlier, in another domain: that of perception. The local population was subjected to a constant barrage of messages designed to erode their trust in Kyiv, to inflate pro-Russian sentiment, and to instill the idea of a divided and threatened collective identity. The West observed this with the confusion of someone who recognizes a pattern but can’t yet name it. And, as often happens when strategy sheds its skin, the West understood what was happening too late. The peninsula fell, Russian forces consolidated their positions, and, in retrospect, it became clear that the military operation had been only the visible phase of a campaign whose first blow had been informational.
This logic resurfaced with greater intensity in 2022. For months, as satellites revealed the buildup of Russian forces around Ukraine, Europe was silently subjected to systematic pressure: destabilizing messages about energy costs, questioning the legitimacy of the Ukrainian government, insinuations that NATO was responsible for the crisis, warnings about the irreversible economic damage that sanctions would cause, and a constant discourse aimed at fracturing cohesion among European states. By the time the armored vehicles finally crossed the border, the continent’s population had already been hit by another kind of fire: one aimed at their willingness to respond.
Defining Informational Coercion in Strategic Terms
Understanding this phenomenon requires starting with the doctrinal definition of strategy as formulated by Military Strategy Magazine: “the direction and use—or the threat of use—of force in the service of political objectives.”[1] Information, even when it lacks mass and volume, influences that direction.[2] It can intensify the use of force, delay it, justify it, or prevent it. It can shape a government’s perception of the situation, alter the political climate upon which a coalition rests, raise the electoral cost of making a decision, or even generate inaction.
Sun Tzu observed that “supreme excellence consists in breaking the enemy’s resistance without fighting.” In the classical strategic tradition, information is part of “preparing the battlefield” before military force is deployed. Modifying the cognitive environment—perception, morale, cohesion, political calculation—precedes and conditions any operation. This explains why contemporary actors seek to shape will and risk assessment before an open confrontation occurs.
For this reason, the information manipulation relevant to strategists is not that which merely deceives or distorts, but rather that which alters the relationship between ends, ways, and means—Arthur Lykke’s classic triad.[3] When disinformation increases the political cost of action, reduces the diplomatic room for maneuver, or weakens the social cohesion necessary to sustain an operation, it directly interferes with the ways in which a state seeks to achieve its ends. And if it also affects the centers of gravity—political legitimacy, social cohesion,[4] morale, and international credibility—then it ceases to be a communication phenomenon and becomes an instrument of coercion.[5]
The Chinese armed forces have systematized this logic in the concept of cognitive warfare, where the primary objective is not to destroy material capabilities but to alter the adversary’s interpretation of reality: their confidence, cohesion, and strategic judgment. Under this vision, competition begins long before conflict and is geared toward influencing how a population perceives risks, threats, and legitimacy, anticipating the use of physical force.
In 2014, during the annexation of Crimea, this coercion sought to neutralize the Ukrainian government’s capacity to act and reduce Western willingness to intervene. In 2022, the strategy was broadened: it aimed to fracture the European response, anticipate protests, influence perceptions about the energy cost of sanctions, increase social anxiety, and portray Ukrainian resistance as unsustainable. Information, like any weapon, had a specific objective: to influence decisions.
Western strategic theory possesses another fundamental principle for understanding this scenario: the distinction between ends, ways, and means. The end of a democracy, in this context, is to maintain its freedom of action, uphold its institutions, and preserve its capacity to make decisions under pressure. The ways consist of deterring, undermining, exposing, and absorbing hostile campaigns. The means encompass regulation, technological capabilities, civic education, intelligence, and international cooperation.
However, none of these variables is effective unless they are integrated into a campaign. And a campaign—as Clausewitz teaches—requires identifying the enemy’s center of gravity, but also one’s own. In democracies, that center is legitimacy. This means that any response must be proportionate, transparent, and compatible with the rule of law; otherwise, defense becomes internal aggression and erodes what it seeks to protect.
The strategic problem for the West is twofold. On the one hand, it must deny the aggressor any advantages, hindering its ability to use informational dominance as a weapon of political and social control. On the other hand, it must prevent this defense from compromising civil liberties, pluralism, or institutional trust. This dual mission demands an extremely delicate balance between effectiveness and legitimacy.
To measure this balance, clear success criteria are required. The first is the increase in costs for the aggressor, which in Western doctrine is called deterrence by denial: preventing the hostile operation from achieving its intended effect. The second is maintaining legitimacy, ensuring that all responses adhere to legal frameworks and are subject to parliamentary oversight. The third is the continuity of military and diplomatic operations, meaning that the opposing narrative does not undermine the state’s capacity for action. The fourth is social resilience, understood as the time it takes a society to recover from a hostile campaign.[6]
These doctrinal foundations allow us to organize the analysis: digital manipulation is not an accident of modern communication, but a strategic instrument that, by altering ends, ways, and means, modifies risk and cost calculations, influences political will, and reconfigures the balance between force, legitimacy, and collective cohesion.[7]
Recent history shows that this is not hypothetical. It is real, and it operates before, during, and after conventional conflicts.
From Isolated Measures to a Western Strategic Framework
When Europe began to understand that information manipulation was not a scattered phenomenon but an instrument of coercion, it began to examine its institutional responses with new eyes. Until then, each element—regulation, platforms, media literacy, international cooperation—existed as an isolated compartment, more akin to a collection of policies than a strategy. The annexation of Crimea in 2014 and the invasion of 2022 revealed that this approach was insufficient; democracies had to think in terms of campaigns, not isolated measures. It was then that the four lines of Western operation began to be reorganized under a doctrinal logic that connected means and ends within a coherent framework.
The first of these lines of action, though often perceived as bureaucratic or unrelated to military matters, was regulation. Regulation doesn’t appear in classic strategy manuals, but in a digital environment it becomes an instrument designed to increase the adversary’s cost. Not because bureaucracy is a weapon in itself, but because it establishes a set of conditions that force hostile actors to operate with friction. Risk management, content traceability, mandatory audits, transparency obligations, and the legal responsibilities of platforms function as obstacles that degrade an actor’s ability to flood the information space with content designed to influence critical decisions.
This link between regulation and coercion became evident after the 2022 Russian invasion. When the European Union restricted the distribution of RT (Russia Today) and Sputnik, it did not do so for editorial or aesthetic reasons.[8] It did so because it understood that these media outlets functioned as operational arms of the Russian state and as mechanisms of narrative saturation designed to manipulate perceptions at a moment of high political vulnerability. Some superficially read the decision as censorship, but in essence it constituted a strategic maneuver: depriving the aggressor of a vector that contributed to its coercive campaign.[9] In doctrinal terms, this amounted to a mechanism of denial. The restriction did not seek to impose a narrative, but rather to prevent a hostile actor from conditioning the decision-making space of European democracies.
The Digital Services Act deepened that same approach with even greater sophistication. The initial narrative of the DSA presented it as a regulation of digital markets, but its implementation in 2023 revealed its strategic potential: very large online platforms (VLOPs) were required to assess systemic risks, document how they mitigate harmful content, reduce artificial amplification, ensure advertising transparency, and submit to scrutiny by the European Commission.[10] These obligations are, in practice, mechanisms designed to increase the adversary’s operating costs. If a hostile actor wants to saturate the information space, it must now do so in an environment where every step leaves a trace, where certain behaviors trigger mitigation obligations, where uncontrolled amplification is met with automated mechanisms that slow it down, and where the most effective channels are under regulatory oversight.[11] [12] This architecture generates friction, and friction, in strategy, is a form of deterrence: not deterrence through punishment, but deterrence by denial, which aims to prevent the adversary from obtaining results.
At the same time, the second line of operation—the platforms—acquired a central role in democratic defense. Initially, states treated the platforms as neutral actors providing digital infrastructure. But with the evolution of hostile campaigns, it became clear that this infrastructure was both part of the problem and part of the solution. A recommendation system is not neutral: it determines what the population sees, in what sequence, at what speed, and under what emotional context. When a hostile actor seeks to saturate a debate, it needs platforms to offer expedited channels. When saturation is blocked by measures such as downranking, the slowing down of potentially harmful content, ad tracking, or the verification of political campaigns, the operation loses effectiveness. It is a tactical dispute within a space where speed, volume, and repetition are essential variables.
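The mechanics of one such countermeasure can be made concrete. The Python sketch below is a minimal illustration of downranking, not any platform’s actual algorithm; every signal, threshold, and name in it is a hypothetical assumption. It captures the idea just described: content is not removed, but its recommendation score is reduced when its amplification pattern looks coordinated rather than organic, which slows the saturation on which hostile campaigns depend.

```python
# Minimal sketch of downranking: demote items whose amplification looks
# coordinated rather than organic. All thresholds and field names are
# illustrative assumptions, not taken from any real platform.
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    relevance: float        # base recommendation score (0..1)
    shares_per_hour: float  # observed amplification speed
    unique_sharers: int     # breadth of the sharing population

def coordination_penalty(item: Item) -> float:
    """High share velocity from few accounts suggests artificial amplification."""
    if item.unique_sharers == 0:
        return 0.0
    velocity_per_account = item.shares_per_hour / item.unique_sharers
    # Hypothetical threshold: more than 5 shares/hour per account triggers demotion.
    return min(1.0, velocity_per_account / 5.0)

def ranked_score(item: Item) -> float:
    """Downranking multiplies relevance by a demotion factor instead of removing content."""
    return item.relevance * (1.0 - 0.8 * coordination_penalty(item))

items = [
    Item("organic-post", relevance=0.7, shares_per_hour=40, unique_sharers=200),
    Item("flooded-post", relevance=0.7, shares_per_hour=400, unique_sharers=30),
]
for it in sorted(items, key=ranked_score, reverse=True):
    print(it.item_id, round(ranked_score(it), 3))
```

The design choice matters strategically: demotion degrades the adversary’s effect while leaving the content visible, which is far easier to reconcile with pluralism than outright removal.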
But for this defense to be legitimate, it cannot rely on spontaneous decisions by private companies. The democratic state has the responsibility to establish the aims, principles, and safeguards. This article makes this clear: these actions must be subordinate to explicit political objectives, regularly audited, integrated within frameworks such as the DSA, and subject to judicial review.[13] Otherwise, they risk eroding the democratic center of gravity. Therefore, the actions of these platforms—even if they have tactical impact—must be understood as a means and never as an end. Ends belong to the political domain, not the algorithmic one.
The third line of operation, far less visible but probably more important in the long run, is civic resilience. Information wars not only seek to weaken political decisions but also to damage the emotional and perceptual structure of societies. To this end, they exploit divisions, polarization, economic anxiety, cultural resentments, and technological uncertainty. In the years following 2014, Eurobarometer surveys recorded a sustained increase in the proportion of Europeans who believed they were regularly exposed to disinformation.[14] This perception is itself a symptom of vulnerability, but also a warning sign. When the population feels it is under attack, democracy must respond by strengthening its cognitive capacity, its ability to detect deception, its critical thinking, and its understanding of how the digital environment works.
Media literacy is not, as it is sometimes presented, a school program without strategic consequences. It is part of the preparation of the cognitive theater. Just as a military campaign prepares the terrain, analyzes routes, secures lines, and anticipates contingencies, literacy in the information domain creates a population less susceptible to emotional saturation, less prone to falling into narrative traps, better prepared to recognize adversarial patterns, and more capable of recovering after a hostile operation.[15] This article uses a concept invaluable for its precision: resilience as recovery time. That is the real strategic metric. How long does it take a society to rebalance its perception after an information attack? The shorter that time, the harder it is for the aggressor to sustain its effects.[16] [17]
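Treating resilience as recovery time invites a concrete measurement. The following Python sketch is one hedged way to operationalize it, assuming a society’s perception is tracked through some aggregate daily indicator (a trust or sentiment index, for instance); the indicator, baseline window, and tolerance band are illustrative assumptions, not doctrinal standards.

```python
# A minimal sketch of "resilience as recovery time", assuming a daily
# perception indicator exists. Baseline window and tolerance band are
# illustrative assumptions.
from statistics import mean, stdev

def recovery_time(series: list[float], attack_day: int,
                  baseline_days: int = 14, tolerance: float = 2.0) -> int | None:
    """Days after `attack_day` until the indicator returns to its pre-attack band.

    The band is baseline mean +/- tolerance * baseline stdev.
    Returns None if the indicator never recovers within the series.
    """
    baseline = series[attack_day - baseline_days:attack_day]
    mu, sigma = mean(baseline), stdev(baseline)
    low, high = mu - tolerance * sigma, mu + tolerance * sigma
    for day, value in enumerate(series[attack_day:], start=0):
        if low <= value <= high:
            return day
    return None

# Stable baseline, sharp drop at day 14, gradual recovery.
trust = [70.0 + 0.5 * (i % 3) for i in range(14)] + [55, 58, 62, 66, 69, 70]
print(recovery_time(trust, attack_day=14))  # -> 5 (first day back inside the band)
```

Under this framing, two societies hit by comparable campaigns can be compared by a single number: the days each needs to return to its pre-attack band.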
The fourth line of operation, cooperation and intelligence, extends defense from the national scale to the international sphere. Digital manipulation knows no borders. A hostile actor can operate from a remote region, use servers in third countries, coordinate from multiple jurisdictions, and disseminate content in different languages to target specific communities. The only way to counter this is with a cooperative network that matches or exceeds the adversary’s mobility. Structures like East StratCom or the NATO StratCom COE exist precisely for this purpose.[18] They enable the detection of common patterns, the attribution of operations, the sharing of analyses, the coordination of fact-based narratives, and the prevention of states acting in isolation.[19] [20] Without that cooperation, each country would react on its own, opening gaps that the adversary could exploit.
These four lines of action—regulation, platforms, resilience, and cooperation—do not operate in isolation. They are cogs in a strategic machine. Each generates particular effects, but they only acquire real value when they function as a campaign. This article emphasizes this point, and rightly so: democratic defense against hostile campaigns must be conceived as an operational structure, not as a collection of uncoordinated measures. It requires continuity, sequence, evaluation, legitimacy, and coherence between means and ends.
Recent evidence confirms this need. The European decision of March 2022 on RT and Sputnik shows how regulatory action can be integrated within a broader state response. The full implementation of the DSA in 2023 demonstrates how regulation can have an operational impact without infringing on democratic freedoms. Official reports reveal campaigns targeting electoral processes, media outlets, and ethnic minorities, with patterns that are repeated in multiple countries and can only be identified and neutralized through multinational cooperation.[21]
None of this is theoretical. It’s all real and verifiable. What seemed like an academic hypothesis a decade ago is now an essential component of geopolitical competition.[22] And just as conventional warfare requires logistics, intelligence, and planning, information defense demands strategic regulation, responsible platforms, a resilient population, and allied cooperation. Recent history demonstrates this: no democratic state can defend itself alone in the information domain.
Risks and Limits of Democratic Information Defense
History demonstrates that every strategy, even the most well-designed, faces structural risks that can slowly erode democratic legitimacy if they are not anticipated. In the informational arena, these risks are especially acute because they develop in gray areas where the boundaries between security and freedom are fragile and, at times, ambiguous. The first risk, perhaps the most insidious, is the loss of legitimacy. A democracy can defend itself against external attacks, but if it does so with methods that appear arbitrary, opaque, or excessive, it risks destroying what it seeks to preserve. The response to digital manipulation can never devolve into covert censorship, nor can it become a mechanism that silences legitimate voices under the pretext of protecting the public. The strength of a democracy lies in its capacity to respond firmly, but with proportionality, clarity, and public accountability. If a measure cannot be openly explained, it cannot be strategically justified.
Alongside the issue of legitimacy, a second risk arises: the constant danger of the adversary’s adaptation. Hostile actors change tactics, migrate between platforms, alter their amplification methods, and modify their targeting strategies to circumvent any barriers imposed upon them. A block can lead to a proliferation of encrypted channels; a transparency measure, to the displacement of campaigns toward micro-influencers; a traceability requirement, to the outsourcing of content to intermediaries that are harder to track. Therefore, a democratic response cannot depend on a single instrument. It requires an intelligence system capable of observing patterns, not just isolated units of content; a system that can understand sequences, flows, circuits, amplification rhythms, and tactical shifts. Traceability, when exercised proportionally and under judicial oversight, must serve precisely this purpose: to identify behaviors, never to monitor citizens.
A third risk lies in the realm of strategic epistemology: the confusion between activity and effect. In the information domain, where numbers abound and platforms can produce massive metrics, there is a temptation to evaluate the effectiveness of policies by the amount of content removed, the number of labels applied, or the volume of reports processed.[23] However, these figures do not reveal whether societies have maintained cohesion, whether governments have retained freedom of action, or whether the aggressor has truly been prevented from achieving its aims. Strategy, in its strictest sense, demands that results be measured by real impacts and not by superficial indicators. This is why the doctrinal distinction between Measures of Performance and Measures of Effect is so crucial.[24] The former track activity; the latter, strategic results. Only the latter allow us to know whether a democracy has preserved what matters: operational continuity, social cohesion, recovery time, and political freedom to act without hostile constraints.[25]
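One way to keep the two categories from blurring in practice is to separate them structurally in whatever assessment tooling a government uses. The Python sketch below is a hypothetical illustration (all field names and thresholds are assumptions, not doctrine): an assessment can report enormous activity while every measure of effect signals failure.

```python
# Illustrative contrast between measures of performance (activity) and
# measures of effect (strategic outcomes). Field names and thresholds
# are hypothetical, chosen only to make the distinction concrete.
from dataclasses import dataclass

@dataclass
class CampaignAssessment:
    # Measures of performance: what we did.
    posts_removed: int
    labels_applied: int
    reports_processed: int
    # Measures of effect: what changed strategically.
    recovery_days: float          # resilience metric, as in the earlier sketch
    cohesion_index_delta: float   # change in a survey-based cohesion indicator
    decisions_constrained: int    # government decisions demonstrably delayed or blocked

    def effective(self) -> bool:
        """Success is judged on effects, regardless of how much activity occurred."""
        return (self.recovery_days <= 7
                and self.cohesion_index_delta >= -0.05
                and self.decisions_constrained == 0)

busy_but_failing = CampaignAssessment(
    posts_removed=120_000, labels_applied=45_000, reports_processed=9_800,
    recovery_days=30, cohesion_index_delta=-0.12, decisions_constrained=2)
print(busy_but_failing.effective())  # False: high activity, no strategic effect
```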
There is also a more subtle, but no less relevant, risk: excessive dependence on private actors. When a society’s information architecture relies on platforms that are not subject to direct democratic control, critical decisions can fall into the hands of entities that are not accountable to the public.[26] In some cases, this dependence can lead to regulatory capture, where corporate interests end up shaping the regulatory framework; in others, it can produce algorithmic biases that affect the visibility of certain content or the speed at which relevant information circulates. Only through independent audits, clear transparency obligations, data access agreements for public research, and rigorous legal oversight can this asymmetrical relationship be balanced.
Added to all this is a risk that lurks in any security debate: the temptation to over-securitize. When every public controversy is interpreted as a threat, democracy loses its ability to distinguish between legitimate criticism and hostile action. This confusion can lead to the militarization of public debate, the treatment of cultural differences as geopolitical risks, and the erosion of pluralism in the name of protection. To avoid this, the doctrine demands clear activation thresholds and a philosophy of minimal intervention: acting only when there is concrete evidence of strategic impact.
Operational Architecture for Democratic Information Defense
Once these risks are mapped, the fundamental operational question emerges: how should a democracy organize itself to confront these threats coherently? This article offers a structured answer: information defense should adopt the form of an operational architecture, with inter-ministerial governance, clear lines of responsibility, specialized teams, mechanisms for continuous evaluation, and close coordination among regulation, technology, diplomacy, defense, and civic education.
Inter-ministerial governance is essential because no single ministry, agency, or sector can encompass the entirety of the problem. Hostile operations cross the domains of security, justice, economic regulation, education, defense, foreign affairs, and digital platforms. Only a structure where all these actors converge can think in terms of campaigns, assign clear responsibilities, and ensure that available instruments are aligned with political objectives. This governance must operate under parliamentary oversight, publish regular reports, and maintain open channels of communication with the public to sustain legitimacy at all times.
The methodological aspect is equally crucial. The distinction between Measures of Performance and Measures of Effect, mentioned earlier as a risk of interpretation, also forms the basis for evaluating any information campaign. Democracies must know which actions truly serve to preserve resilience and freedom of action. A campaign can carry out thousands of interventions, but if none of them alter the adversary’s ability to influence vital decisions, it will have achieved nothing. This is why this article insists on the need to establish clear success criteria and to create strategic analysis units capable of evaluating effects in an integrated manner. Among these teams, so-called red teams play an essential role, as they allow for anticipating vulnerabilities, simulating attacks, testing defenses, and correcting weaknesses before they can be exploited by hostile actors.
These teams are complemented by table-top exercises, which replicate possible scenarios and allow for the evaluation of inter-institutional coordination, response speed, the robustness of protocols, and the coherence of operational mechanisms. In times when the speed of information exceeds the capacity of states to process it, training the collective response is essential. These exercises allow for the detection of weaknesses that are invisible in calm times and strengthen preparedness for moments of crisis.
This article mentions another essential element: data access agreements. In an environment where information flows through private platforms that possess vast amounts of data on social interactions, amplification patterns, and digital behavior, states require legal mechanisms that allow them to understand the space in which they operate. Not to monitor citizens, but to detect threats, anticipate hostile actions, and assess the impact of decisions. Without data access, any defense is, by definition, incomplete. And without legal safeguards protecting people’s rights, such access becomes incompatible with democracy. A balance is only possible through regulatory frameworks like the DSA, external audits, strict protocols, and judicial oversight.
In an accelerated information environment, anticipation becomes key. That’s why this article includes the need for advanced intelligence capabilities fueled by big data analytics and artificial intelligence. These tools, when used within the ethical and legal boundaries of a democracy, allow for the detection of patterns, the attribution of operations, and the anticipation of hostile campaigns before they reach operational maturity. Artificial intelligence can identify amplification anomalies, analyze coordinated behavior, detect sudden shifts in the emotional charge of public discourse, and map networks that act as multipliers of hostile content. Its usefulness lies not in replacing human analysis, but in amplifying its capacity to see.
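One of the simplest analytics in this family can be sketched directly. The Python fragment below flags amplification anomalies in an hourly volume series for a narrative or hashtag using a rolling z-score; this is a deliberately crude stand-in for the machine-learning pipelines such systems would actually use, and the window and threshold are assumptions.

```python
# Minimal sketch of amplification-anomaly detection: flag hours whose
# volume is extreme relative to a trailing window. A rolling z-score is
# a simple stand-in for real ML pipelines; window and threshold are
# illustrative assumptions.
from statistics import mean, stdev

def amplification_anomalies(hourly_volume: list[int],
                            window: int = 24, z_threshold: float = 3.0) -> list[int]:
    """Return hour indices where volume is anomalously high vs. the trailing window."""
    flagged = []
    for t in range(window, len(hourly_volume)):
        history = hourly_volume[t - window:t]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and (hourly_volume[t] - mu) / sigma > z_threshold:
            flagged.append(t)
    return flagged

# Steady organic chatter, then a sudden coordinated surge at hour 30.
volume = [100 + (i % 5) for i in range(30)] + [900, 1200, 1100]
print(amplification_anomalies(volume))  # -> [30, 31, 32]
```

In practice, a flag like this would trigger human analysis, not automatic action, consistent with the principle that these tools amplify judgment rather than replace it.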
A strategy, to be comprehensive, must also incorporate mechanisms for continuous learning. No information campaign remains static, and no state can afford to always react to new threats with outdated tools. Doctrine demands a constant cycle of evaluation, correction, adaptation, and updating. Every attack, every failure, every successful response must feed into a learning system that improves defenses.
Primacy of Politics and Democratic Legitimacy
This operational framework fulfills its purpose only if it serves the primacy of politics. Information defense cannot become an end in itself. Technologies, legal frameworks, analytical capabilities, cooperation mechanisms, and resilience programs must be subordinated to explicit democratic goals and to political leadership that acts responsibly. As this article emphasizes, information defense must protect without distorting, respond without unjustified escalation, act without excess, and safeguard societies’ capacity to debate, dissent, and decide freely.
The conclusion, therefore, is not merely an argumentative recap but a statement of strategic principles. Informational coercion should never be underestimated because it operates on what makes democracy possible: public trust. But neither should it be overestimated, because a poorly conceived defense can damage the very center of gravity it seeks to reinforce. The key lies in understanding that information strategy is part of the overall strategy of the State, and that freedom of political action—that concrete expression of democratic sovereignty—depends on society remaining impervious to hostile attempts to manipulate it from the shadows.
Ultimately, democratic defense against digital manipulation is not about controlling narratives or persecuting dissent, but about preventing an aggressor from transforming the information space into a field of coercion.[27] Democracy is best defended when it remains true to itself, when it acts with strategic discipline, when it assesses real effects, when it recognizes the importance of information control, and when it preserves the primacy of politics over any technical instrument. Information coercion seeks to alter the context in which decisions are made. Democratic strategy, if implemented with moderation, clarity, and legitimacy, can prevent this without sacrificing what defines it: freedom, pluralism, and public accountability.
[1] Military Strategy Magazine, “What is ‘Strategic’? The Who, What, When, Where, and Why of Strategy,” Vol. 9, No. 4 (Summer 2024), https://www.militarystrategymagazine.com/article/whatis-strategic-the-who-what-when-where-and-why-of-strategy/.
[2] US Joint Chiefs of Staff, Joint Publication 3-13: Information Operations (Washington, DC: Joint Staff, 2014; incorporating changes, latest ed.).
[3] Arthur F. Lykke Jr., “Defining Military Strategy,” Military Review 77, no. 1 (January–February 1997): 183–186, https://www.armyupress.army.mil/Portals/7/military-review/Archives/English/75th-Anniversary/75th-Lykke.pdf.
[4] Carl von Clausewitz, On War, ed. and trans. Michael Howard and Peter Paret (Princeton: Princeton University Press, 1976).
[5] Thomas C. Schelling, Arms and Influence (New Haven, CT: Yale University Press, 1966).
[6] European Union Agency for Cybersecurity (ENISA), Threat Landscape series (selected years) on information manipulation vectors, https://www.enisa.europa.eu/topics/csirt-cert-services/.
[7] Colin S. Gray, The Strategy Bridge: Theory for Practice (Oxford: Oxford University Press, 2010).
[8] Council Regulation (EU) 2022/350 of 1 March 2022 amending Regulation (EU) No 833/2014, OJ L 65 (March 2, 2022): 1–4, https://eur-lex.europa.eu/eli/reg/2022/350/oj/eng.
[9] Council Decision (CFSP) 2022/351 of 1 March 2022 amending Decision 2014/512/CFSP concerning restrictive measures in view of Russia’s actions destabilizing the situation in Ukraine, OJ L 65 (March 2, 2022): 5–7, https://eur-lex.europa.eu/eli/dec/2022/351/oj/eng.
[10] European Commission, “Digital Services Act – Very large online platforms and search engines (VLOPs/VLOSE)” and “DSA enforcement framework,” accessed September 18, 2025, https://digital-strategy.ec.europa.eu/en/policies/dsa-vlops; https://digital-strategy.ec.europa.eu/en/policies/dsa-enforcement.
[11] James P. Farwell, Information Warfare: Forging Communication Strategies for Twenty-First Century Operational Environments (Quantico, VA: Marine Corps University Press, 2020).
[12] European Commission, Strengthened Code of Practice on Disinformation (2022), https://digital-strategy.ec.europa.eu/en/policies/code-practice-disinformation.
[13] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services (Digital Services Act), OJ L 277 (October 27, 2022): 1–102, https://eur-lex.europa.eu/eli/reg/2022/2065/oj/eng.
[14] European Commission, Standard Eurobarometer 101 (Spring 2024): media and disinformation indicators, https://www.gesis.org/en/eurobarometer-data-service/data-anddocumentation/standard-special-eb/study-overview/eurobarometer-1014-za8844-aprilmay-2024.
[15] European Commission, Flash Eurobarometer 464: Fake News and Disinformation Online (February 2018), doi:10.2759/559993, https://europa.eu/eurobarometer/surveys/detail/2183.
[16] Christopher Paul and Miriam Matthews, “The Russian 'Firehose of Falsehood' Propaganda Model,” RAND Perspective PE-198-OSD (2016), https://www.rand.org/pubs/perspectives/PE198.html.
[17] Kate Starbird, “Disinformation’s spread: bots, trolls and all of us,” Nature 571 (2019): 449, https://doi.org/10.1038/d41586-019-02235-x.
[18] EUvsDisinfo, East StratCom Task Force (EEAS), accessed September 18, 2025, https://euvsdisinfo.eu/.
[19] NATO Strategic Communications Center of Excellence (StratCom COE), official site, accessed September 18, 2025, https://stratcomcoe.org/.
[20] Military Strategy Magazine, “What is Strategy,” MSM Brief, accessed September 18, 2025, https://www.militarystrategymagazine.com/briefs/what-is-strategy/.
[21] Samantha Bradshaw and Philip N. Howard, “The Global Disinformation Order: 2019 Global Inventory of Organized Social Media Manipulation,” Oxford Internet Institute Working Paper 2019.3, https://comprop.oii.ox.ac.uk/research/.
[22] Yochai Benkler, Robert Faris, and Hal Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (New York: Oxford University Press, 2018).
[23] Organization for Economic Co-operation and Development (OECD), Combatting Misinformation and Disinformation: An Implementation Framework for Public Communication (Paris: OECD, 2023), https://www.oecd.org/gov/comms/.
[24] Michael Mazarr et al., Hostile Social Manipulation: Present Realities and Emerging Trends (Santa Monica, CA: RAND Corporation, 2019), https://www.rand.org/pubs/research_reports/RR2713.html.
[25] Christopher Paul and Miriam Matthews, “The Russian ‘Firehose of Falsehood’ Propaganda Model,” RAND Perspective PE-198-OSD (2016), https://www.rand.org/pubs/perspectives/PE198.html.
[26] European Court of Human Rights (ECtHR), Editorial Board of Pravoye Delo and Shtekel v. Ukraine, no. 33014/05, Judgment of May 5, 2011 (on journalistic links and online speech).
[27] Gonzalo Javier Rubio Piñeiro, Capabilities of the Russian Intelligence System: The Case of Crimea’s Annexation by Russia. Among Active Measures, Little Green Men, Special Forces, Cyberactivists and Spies (Buenos Aires: Autores de Argentina, 2021), EPUB, ISBN 978-987-87-1815-6.

