
How to analyze an influence operation: methodology
Theory is essential, but nothing illuminates the mechanics of cognitive warfare like detailed case studies. Real-world operations reveal tactics, techniques, and procedures (TTPs) that remain abstract in doctrinal documents. They expose vulnerabilities, demonstrate consequences, and provide empirical evidence for defensive countermeasures. Adversaries study past operations to improve future ones. Defenders who fail to learn from documented cases are condemned to be surprised by their evolution.
This article examines several of the most significant documented cognitive warfare operations, extracting lessons for defense professionals, intelligence analysts, and policymakers.
Case studies in cognitive warfare serve multiple purposes:
- Threat intelligence: Identify adversary TTPs, infrastructure, and targeting preferences
- Pattern recognition: Detect common tactics across seemingly disparate operations
- Vulnerability identification: Reveal which populations, platforms, or decision processes are most susceptible
- Countermeasure development: Test defensive responses against real-world operations
- Training and education: Provide concrete examples for cognitive defense training
- Attribution and deterrence: Build legal and evidentiary foundations for response
Case study 1: Russian interference in the 2016 US election

The Internet Research Agency (IRA), a St. Petersburg-based "troll farm" with ties to Russian intelligence, conducted a multi-year operation to influence the 2016 US presidential election. The operation was documented through congressional investigations (Senate Intelligence Committee, House Intelligence Committee), Special Counsel Robert Mueller’s investigation, academic research, and investigative journalism.
Scale and scope:
- Personnel: Hundreds of paid "trolls" working in shifts
- Budget: An estimated $1.25 million per month
- Accounts: Thousands of fake accounts across Facebook, Twitter, Instagram, YouTube, Tumblr, and Reddit
- Content: Over 80,000 Facebook posts reaching 126 million users; over 3,000 Twitter accounts; 43,000 Instagram posts
- Targeting: Swing states (Michigan, Wisconsin, Pennsylvania, Florida); segmented audiences (African American, conservative white, Latino, Muslim)
| Tactic | Description |
|---|---|
| Coordinated inauthentic behavior (CIB) | Networks of fake accounts operating in coordination |
| Audience segmentation | Different content for different demographic groups: pro-Black Lives Matter content to African Americans; anti-immigrant content to white conservatives |
| Real-world events | Organized rallies both supporting and opposing candidates — sometimes simultaneously |
| Hacked-and-leaked | GRU cyber intrusions into DNC and Clinton campaign emails; released through WikiLeaks and DCLeaks |
| Bot amplification | Automated accounts inflating engagement metrics |
| Microtargeting | Platform ad tools targeting specific demographics and geographies |
| Influence on journalists | Fake accounts engaging with reporters, pitching stories |
| Suppression campaigns | Discouraging turnout among targeted demographics (e.g., "vote by text" disinformation aimed at African Americans) |
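Coordinated inauthentic behavior and bot amplification both leave a temporal fingerprint: accounts run from the same operation tend to post within the same narrow windows. The sketch below is a minimal illustration, assuming simplified post records of (account_id, timestamp) pairs; the function name, window size, and threshold are invented for the example, and a production detector would combine timing with content and network features.

```python
from collections import defaultdict
from itertools import combinations

def coordinated_pairs(posts, window_s=60, min_shared_windows=5):
    """Flag account pairs that repeatedly post inside the same short window.

    posts: iterable of (account_id, unix_timestamp) pairs.
    window_s and min_shared_windows are illustrative thresholds.
    """
    # Bucket posts into fixed windows (real systems use sliding windows
    # and content features, not timestamps alone).
    buckets = defaultdict(set)
    for account, ts in posts:
        buckets[int(ts // window_s)].add(account)

    # Count how many distinct windows each account pair shares.
    pair_counts = defaultdict(int)
    for accounts in buckets.values():
        for pair in combinations(sorted(accounts), 2):
            pair_counts[pair] += 1

    flagged = [(pair, n) for pair, n in pair_counts.items()
               if n >= min_shared_windows]
    return sorted(flagged, key=lambda item: -item[1])
```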
Impact:
- Direct vote impact: Debated; measuring the counterfactual poses serious methodological challenges
- Social division: Significant and measurable increases in polarization, animosity, and distrust
- Trust erosion: Long-term decline in confidence in election integrity, media, and democratic institutions
- Normalization: Established disinformation as a permanent feature of US political discourse
Defensive failures:
- Slow detection: Platforms and government were slow to identify the operation
- Reactive response: Countermeasures deployed after significant impact
- Attribution challenges: Public attribution took over a year
- Platform vulnerability: Ad systems and algorithmic amplification were exploited
- Societal vulnerability: Pre-existing social divisions were weaponized
Lessons:
- Pre-election inoculation: Pre-bunking and trusted messenger networks are essential
- Platform reform: Ad transparency, authentication requirements, and algorithmic changes are needed
- Rapid attribution: The intelligence community must be prepared to attribute and communicate quickly
- Cross-platform information sharing: Platforms and government must share threat intelligence
- Societal resilience: Media literacy and institutional trust are the ultimate defense
Case study 2: the Ukraine "biolabs" narrative

In the lead-up to Russia’s full-scale invasion of Ukraine, Russian state media and disinformation networks spread claims that Ukraine operated US-funded bioweapons laboratories. The narrative was laundered through alternative media, amplified by bot networks, and briefly echoed by some Western politicians.
Timeline:
- Pre-invasion (late 2021 – early 2022): Initial seeding through Russian state media (RT, Sputnik)
- Invasion period (February–March 2022): Massive amplification; narrative cited as partial justification for invasion
- Post-invasion: Narrative persists; weaponized by anti-vaccine and anti-government movements globally
| Tactic | Application |
|---|---|
| Information laundering | Russian state media claims repackaged by alternative news sites, then amplified by bots, then covered by mainstream media as "controversy" |
| Exploitation of legitimate programs | US-funded biological threat reduction programs in Ukraine (defensive, transparent) provided a grain of truth |
| Emotional framing | "American bioweapons near Russian borders" exploited fears of biological warfare |
| Deniable amplification | Official Russian government statements cited anonymous "documents" and "experts" |
Impact:
- Justification for invasion: Provided partial pretext for military action
- Long-term narrative persistence: Biolab conspiracy theories continue circulating, undermining trust in public health
- Global reach: Narrative spread to anti-Western and anti-vaccine communities worldwide
- Policy complication: Complicated international cooperation on biological threat reduction
Defensive responses:
- Rapid debunking: US and Ukrainian governments publicly denied claims and provided documentation of legitimate programs
- Fact-checking: Independent organizations quickly identified disinformation
- Pre-existing relationships: Trusted messengers (scientific bodies, public health officials) countered the narrative
Lessons:
- Pre-bunking is essential: Once a narrative spreads, debunking is slow and often ineffective
- Trusted messengers matter: Government statements alone are insufficient; independent credible voices are essential
- The grain of truth is exploited: Adversaries weaponize legitimate programs; defensive communication must anticipate this
- Narrative persistence: Disinformation does not disappear when debunked; long-term monitoring and counter-narratives are required
Case study 3: China’s "Spamouflage" network

Researchers have documented a massive Chinese influence operation dubbed "Spamouflage": networks of fake social media accounts promoting Chinese government narratives and attacking critics. The operation spans multiple platforms and has grown markedly more sophisticated over time.
Scale and tactics:
- Accounts: Hundreds of thousands of fake accounts across Twitter, Facebook, Reddit, Medium, Quora, and other platforms
- Content: Pro-China narratives on Xinjiang, Hong Kong, Taiwan, COVID-19 origins, the Belt and Road Initiative, and US-China relations
- Tactics: Copy-pasted identical comments, coordinated hashtag campaigns, harassment of journalists and activists, and fake "grassroots" supporters (see the detection sketch below)
- Evolution: Increasing sophistication over time, including AI-generated profile photos, integration of video and multimedia, and more natural language patterns
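The copy-paste tactic noted above is one of the most detectable signals: an identical comment posted by many distinct accounts rarely occurs organically. A minimal sketch follows, assuming comments arrive as (account_id, text) pairs; the normalization rules and the ten-account threshold are illustrative choices, not a documented detection standard.

```python
import hashlib
import re
from collections import defaultdict

def normalize(text):
    """Lowercase and strip URLs, @-mentions, and punctuation so that
    trivially edited copies of the same comment hash identically."""
    text = re.sub(r"https?://\S+|@\w+", "", text.lower())
    return re.sub(r"[^\w\s]", "", text).strip()

def copypasta_clusters(comments, min_accounts=10):
    """Group comments whose normalized text is identical.

    comments: iterable of (account_id, text) pairs. A single text posted
    by many distinct accounts is a strong coordination signal.
    """
    clusters = defaultdict(set)
    for account, text in comments:
        digest = hashlib.sha1(normalize(text).encode()).hexdigest()
        clusters[digest].add(account)
    return {h: accts for h, accts in clusters.items()
            if len(accts) >= min_accounts}
```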
| Dimension | Russian (IRA) | Chinese (Spamouflage) |
|---|---|---|
| Primary target | US and European domestic audiences | Global South, diaspora communities, international institutions |
| Narrative focus | Polarization, division, election interference | Positive promotion of China; attacks on critics |
| Tactics | Microtargeting, real-world events, hacked-and-leaked | Bulk amplification, harassment, inauthentic engagement |
| Effectiveness | Higher penetration in Western audiences | Lower penetration in West; significant in Global South |
Impact:
- Limited Western penetration: Less effective than Russian operations at reaching mainstream Western audiences
- Significant Global South reach: Effective in Africa, Latin America, and Southeast Asia
- Institutional influence: Shaping narratives in the UN, WHO, and other international bodies
- Harassment: Effective silencing of some journalists and activists
Defensive responses:
- Platform enforcement: Gradual removal of accounts; difficulty distinguishing state-sponsored from organic activity
- Academic research: Extensive documentation and public reporting
- Journalistic investigation: Exposés identifying network infrastructure and tactics
Lessons:
- Global South vulnerability: Information defense must be global; Western-focused countermeasures leave gaps
- Attribution is possible: Technical forensics can identify coordinated inauthentic behavior
- Platforms need improvement: Current enforcement is slow and reactive; proactive detection is needed
- Harassment is a tactic: Silencing critics is an explicit objective; defense requires protecting vulnerable voices
Case study 4: QAnon

QAnon is a decentralized conspiracy movement that originated on 4chan in 2017. "Q" (an anonymous figure claiming military intelligence credentials) posted cryptic messages ("Q drops") about a secret war against a global cabal of Satanic, cannibalistic pedophiles, which Donald Trump was allegedly fighting. The movement grew from fringe forums into mainstream political discourse.
| Phase | Period | Characteristics |
|---|---|---|
| Origins | 2017-2018 | 4chan and 8chan posts; niche community decoding "drops" |
| Growth | 2018-2019 | Spread to Facebook, Twitter, YouTube; mainstream media coverage |
| Mainstreaming | 2019-2020 | Political figures reference QAnon; QAnon candidates for Congress |
| Violence | 2020-2021 | Participation in Capitol attack; kidnap plots; threats |
| Decentralization | 2021-present | Platform bans fragment movement; migration to alternative platforms; narrative evolution |
| Tactic | Application |
|---|---|
| Gamification | Decoding "drops" created a sense of investigation and discovery |
| Community bonding | Shared secret knowledge created strong in-group identity |
| Phased disclosure | Gradual revelation of extreme claims; initial attraction through anti-pedophilia framing |
| Self-sealing logic | Lack of evidence explained as conspiracy hiding truth; failed predictions reinterpreted |
| Algorithmic amplification | High-engagement content promoted by platforms |
| Mainstream laundering | Media coverage of the "QAnon phenomenon" spread the movement |
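The algorithmic-amplification row benefits from a concrete illustration. Recommendation feeds generally rank posts by some engagement-over-recency heuristic; the toy scorer below, with invented weights and decay, shows why high-arousal conspiracy content that drives comments and shares can outrank calmer material that merely collects likes.

```python
def engagement_score(post, age_hours, decay=1.5):
    """Toy feed-ranking heuristic: weighted engagement over an age penalty.

    Comments and shares are weighted above likes because they signal
    stronger reactions; this is the loop high-arousal content exploits.
    Weights and decay are invented for illustration.
    """
    engagement = post["likes"] + 2 * post["comments"] + 3 * post["shares"]
    return engagement / (age_hours + 2) ** decay

# A provocative post with heavy shares outranks a calmer, better-liked one.
outrage = {"likes": 120, "comments": 300, "shares": 250}
calm = {"likes": 400, "comments": 40, "shares": 15}
print(engagement_score(outrage, 6) > engagement_score(calm, 6))  # True
```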
Impact:
- Real-world violence: January 6th Capitol attack; attempted kidnapping of Michigan Governor Gretchen Whitmer; murders; threats against public officials
- Political influence: QAnon-sympathetic candidates for Congress; normalization of conspiracy thinking in mainstream politics
- Social fragmentation: Family and friendship breakdowns over QAnon beliefs
- Trust erosion: Distrust in elections, media, science, and government
Defensive responses:
- Platform bans: Removal of QAnon content and accounts (Twitter, Facebook, YouTube, Reddit)
- Migration to alternative platforms: Movement to Gab, Telegram, and other less-moderated spaces
- Counter-narratives: Former QAnon believers sharing exit stories; pre-bunking campaigns
- Exit programs: Counseling and support for individuals leaving the movement
Lessons:
- Decentralized movements are resilient: There is no central leader or infrastructure to target
- Identity-based beliefs resist factual correction: QAnon is an identity, not just a set of beliefs
- Address underlying needs: Belonging, significance, and certainty are the same psychological needs cults exploit
- Platform bans displace but do not eliminate: Content moderation is necessary but insufficient
- Pre-bunking is more effective than debunking: Inoculation must come before exposure
Cross-case TTP comparison

The four operations draw on overlapping toolkits, summarized in the matrix below (a similarity sketch follows the table):

| TTP | Russia (2016) | Russia (Biolabs) | China (Spamouflage) | QAnon (organic) |
|---|---|---|---|---|
| Fake accounts | Yes | Yes | Yes | Limited |
| Bot amplification | Yes | Yes | Yes | Limited |
| Information laundering | Yes | Yes | Limited | Yes |
| Audience segmentation | Yes | No | Limited | Yes |
| Real-world events | Yes | No | No | Yes (Capitol) |
| Hacked-and-leaked | Yes | No | No | No |
| Emotional framing | Yes | Yes | Yes | Yes |
| Exploiting existing divisions | Yes | Limited | Yes | Yes |
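One way to operationalize this matrix is to encode each operation's TTPs as a set and compute pairwise overlap; a new operation whose profile closely matches a documented one is a triage signal for analysts. Here is a minimal sketch using Jaccard similarity, with the table's "Limited" entries treated as absent:

```python
from itertools import combinations

# TTP profiles transcribed from the matrix above; "Limited" entries
# are treated as absent for this rough comparison.
TTPS = {
    "Russia 2016": {"fake_accounts", "bots", "laundering", "segmentation",
                    "real_world_events", "hack_and_leak",
                    "emotional_framing", "exploit_divisions"},
    "Russia biolabs": {"fake_accounts", "bots", "laundering",
                       "emotional_framing"},
    "Spamouflage": {"fake_accounts", "bots", "emotional_framing",
                    "exploit_divisions"},
    "QAnon": {"laundering", "segmentation", "real_world_events",
              "emotional_framing", "exploit_divisions"},
}

def jaccard(a, b):
    """Set overlap: |intersection| / |union|."""
    return len(a & b) / len(a | b)

for (name_a, ttps_a), (name_b, ttps_b) in combinations(TTPS.items(), 2):
    print(f"{name_a} vs {name_b}: {jaccard(ttps_a, ttps_b):.2f}")
```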
Operations that were mitigated or defeated share common defensive characteristics:
- Rapid detection and attribution: Shortening the window between operation launch and public identification
- Platform cooperation: Social media companies removing inauthentic accounts quickly
- Legal frameworks: Laws limiting election-period disinformation (the French model)
- Media literacy: Populations trained to recognize manipulation tactics
- Trusted institutions: Credible counter-messengers (election officials, public health authorities)
- International coordination: Information sharing among affected nations
Operations that succeeded share common defensive failures:
- Slow detection: Platforms and government were slow to identify the operation
- Reactive response: Countermeasures deployed after significant impact
- Attribution challenges: Public attribution took too long
- Platform vulnerability: Ad systems and algorithmic amplification were exploited
- Societal vulnerability: Pre-existing social divisions were weaponized
- Institutional distrust: Populations already distrustful of government and media were more vulnerable
Attributing cognitive warfare operations is more difficult than attributing cyber attacks. Challenges include:
- Plausible deniability: Adversaries structure operations to avoid direct attribution
- Use of proxies: Front organizations, cutouts, and unwitting amplifiers
- Technical obfuscation: VPNs, compromised infrastructure, and fake identities mask origins
- False flags: Adversaries may impersonate other adversaries
Best practices: Multiple independent lines of evidence (technical, human intelligence, behavioral, financial); confidence levels (low/medium/high); public attribution only when confidence is high.
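Those practices can be made explicit as a scoring rubric. The sketch below is purely illustrative: the evidence categories come from the list above, but the weights and thresholds are invented, not drawn from any intelligence community standard.

```python
# Illustrative weights per independent evidence line (invented values).
EVIDENCE_WEIGHTS = {
    "technical": 1.0,    # infrastructure overlap, account forensics
    "behavioral": 0.5,   # TTP match with previously documented operations
    "human_intel": 1.5,  # insider reporting, intercepted communications
    "financial": 1.0,    # payment trails to known entities
}

def attribution_confidence(evidence_lines):
    """Map a set of independent evidence lines to low/medium/high."""
    lines = set(evidence_lines)
    score = sum(EVIDENCE_WEIGHTS.get(line, 0.0) for line in lines)
    if score >= 2.5 and len(lines) >= 2:  # multiple corroborating lines
        return "high"
    if score >= 1.0:
        return "medium"
    return "low"

# Technical forensics alone rates "medium"; corroboration upgrades it.
print(attribution_confidence(["technical"]))                             # medium
print(attribution_confidence(["technical", "financial", "behavioral"]))  # high
```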
Assessing impact of cognitive warfare operations is methodologically difficult:
- Counterfactual impossibility: Cannot know what would have happened without the operation
- Attribution of outcomes: Disentangling influence operations from other causal factors
- Long-term effects: Some effects manifest years later
- Unintended consequences: Operations may backfire
Best practices: Multiple metrics (engagement, belief change, behavioral change, trust measures); longitudinal studies; comparison with control populations where possible.
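Where a comparable unexposed population exists, the control-population practice maps onto a standard difference-in-differences estimate: the control group's trend absorbs background change, and the residual is attributable, with the usual caveats, to the operation. A minimal sketch with invented survey numbers:

```python
def diff_in_diff(exposed_before, exposed_after, control_before, control_after):
    """Difference-in-differences estimate of an operation's effect.

    Each argument is the mean of an outcome metric (e.g., a trust-in-media
    survey score) for the exposed or control population, before and after
    the operation. Subtracting the control trend removes background change.
    """
    return (exposed_after - exposed_before) - (control_after - control_before)

# Invented numbers: trust fell 8 points among the exposed population but
# 3 points in the control, so roughly 5 points are attributable.
print(diff_in_diff(62.0, 54.0, 61.0, 58.0))  # -5.0
```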
Case studies are the empirical foundation of cognitive warfare defense. From Russian election interference to Chinese influence networks to the QAnon movement, documented operations reveal how adversaries think, what tools they use, and which vulnerabilities they exploit. They demonstrate that disinformation works — not always in achieving specific outcomes, but reliably in eroding trust, exacerbating divisions, and creating information fog.
Cross-case pattern analysis reveals common TTPs across seemingly disparate operations: fake accounts, bot amplification, information laundering, emotional framing, and exploitation of existing social divisions. Defensive success requires rapid detection, platform cooperation, legal frameworks, media literacy, trusted institutions, and international coordination.
For defense professionals, studying these cases is not an academic exercise; it is operational preparation. Adversaries study past operations to improve future ones, and defenders who fail to do the same will be perpetually behind. Those who learn systematically, extracting TTPs, identifying patterns, and developing countermeasures, can anticipate, detect, and mitigate before the next operation achieves its objectives.
The cognitive battlefield is not new. But the documented operations of the past decade have revealed its contours with unprecedented clarity. The question is whether defenders will learn from them.
