The field of counter-disinformation is cluttered with interventions, definitions, and approaches. To make sense of it all, we need to start with the basics: what actually is disinformation, and how do governments justify intervening against it?
This post establishes the foundational framework that underpins most official counter-disinformation efforts.
The Official Definition
The European Commission defines disinformation as:
"Demonstrably false or misleading information created, presented and disseminated for economic gain or to deliberately deceive the public, capable of causing public harm. Public harm includes threats to democratic processes and public goods like citizen health, environment, and safety."
Reporting errors, satire, parody, and clearly partisan news are explicitly excluded.
This definition embeds several assumptions worth unpacking:
- Verifiability - The information must be demonstrably false, beyond simple journalistic error or bias
- Attribution - Identifiable actors create, present, and spread this information
- Motive - These actors seek profit or deliberate deception
- Harm - The information threatens democracy, public welfare, or the environment
Why Governments Intervene
Disinformation often constitutes legal behavior. So how do governments justify action?
By establishing that disinformation causes damage to democratic processes and public discourse, intervention becomes legitimate. The Dutch government emphasizes that disinformation aims "to damage public debate, democratic processes, the open knowledge economy, or public health."
The reasoning: disinformation falls outside the protections of normal democratic freedoms because it is either criminal in itself or produces harmful, potentially unlawful consequences. This distinction allows governments to mount counter-disinformation efforts without violating fundamental rights.
The Pollution Metaphor
What made government intervention politically viable was a compelling narrative: digital pollution.
A representative 2019 quote captures the framing: "Society found ways to manage industrial waste. We must do the same for the internet."
UNESCO, the Council of Europe, the OECD, and numerous academics adopted "information pollution" terminology. This metaphor treats information streams as contaminated by "dirt" - and contamination justifies cleanup.
The Process Chain
Treating disinformation as pollution makes it possible to describe the problem as a process chain, with potential intervention points at every link.
For traditional industrial polluters: organizations emit pollutants, which spread through environments, accumulate in multiple locations, and expose people to contaminants with measurable consequences.
For disinformation, the chain is claimed to operate analogously:
| Stage | Industrial Pollution | Disinformation |
|---|---|---|
| Mission | Corporate profit motive | Hostile organizations create false information for gain or deception |
| Emission | Factory releases pollutants | Troll farms and spin doctors produce content |
| Transmission | Pollutants spread through air/water | Controlled media channels and "useful idiots" spread content |
| Remission | Pollutants accumulate in environment | Social media echo chambers perpetuate content |
| Immission | People exposed to contaminants | Personalized messages reach individuals |
| Commission | Health and environmental damage | Threats to democratic processes and public goods |
Why This Framework Matters
This "Standard Disinformation Model" - combining official definition, embedded assumptions, and pollution metaphor - shapes how most interventions are designed and justified.
Understanding the model reveals:
- Where interventions target - Each stage of the chain offers different intervention points
- What assumptions are baked in - Including beliefs about mass persuasion and identifiable adversaries
- Why certain approaches dominate - The framework privileges supply-side interventions (stopping the source) over demand-side ones (building resilience)
In subsequent posts, we'll examine how different generations of interventions map onto this chain - and where the model's assumptions break down.
This post is adapted from a series originally published in Dutch on Frankwatching (2023).
Want to learn more about the DIM Model?
Explore our complete framework for understanding counter-disinformation interventions.