At the second annual Catalyst Summit, held on November 12 in Brampton, Ontario, keynote speaker Elizabeth Anderson, a Fulbright scholar and fellow at the Montreal Institute for Global Security, presented a sobering reality for cybersecurity professionals.
In her view, while modern wars are still fought on the battlefield and in the markets, they are won and lost in the hearts and minds of citizens. Our adversaries are targeting the cognitive domain: the deliberate manipulation of citizens’ perception of reality to subvert societal cohesion and political will.
This is not new. What has changed is the technology: digital platforms and AI have supercharged these tactics. Adversaries can now engineer opinion at scale, fragmenting attention and eroding trust in truth itself.
Many of these issues fall within the cybersecurity domain and will be part of how our industry safeguards Canadian interests going forward. Just as we have taken steps to defend network infrastructure, Anderson believes that we must also defend the information infrastructure upon which democratic decision-making depends.
Cognitive warfare works to diminish citizens’ capacity to understand reality, trust institutions, and make informed decisions. This happens through platform manipulation, algorithmic amplification, and the systematic exploitation of existing societal divisions.
The objective, then, is not merely to persuade, but to degrade our shared understanding of the world around us. The goal is not to convince someone of the merits of a certain view — it is to make everyone lose trust in the concept of truth itself.
If democracies depend on the individual’s freedom to choose their own future, our adversaries have perfected ways to manipulate us to work against our own best interests. As Anderson put it: “It should be Canadians — not the Kremlin, not the Chinese Communist Party — deciding our future.”
As cybersecurity professionals, this puts us on the front line of a new cold war that is already underway, as democratic norms are undercut in nations around the world, often with the support of pluralities of citizens.
What can we do to push back against cognitive intrusion: the manipulation of our information environment as a strategic weapon?
First, we must all make cognitive warfare the “red car” in our lives. This borrows from what is called Red Car Theory, which holds that once something is noticed (for example, a red car), one starts noticing it everywhere (“Why are there so many red cars on the road?”).
If we make a point of being aware of information operations and the manipulation of our information environment, then we will be more likely to notice and react to it.
When we encounter narratives that work against Canada’s interests being parroted by employers, organizations, and other citizens, it is essential to understand where the information originates. Does it actually make sense? How did the idea spread?
Our politicians do not yet realize the gravity of cognitive warfare. Our professional responsibility is to raise awareness, share expertise, and advocate for the institutional capacity to detect, and the political courage to respond to, these operations.
Second, we need to understand how cognitive warfare operates. As Anderson outlined, we should be thinking about:
- Message: What resonates? Adversaries exploit real grievances about institutional failures, historical wounds, and identity divisions to drive disenfranchisement and disengagement.
- Messenger: Who delivers the message? Platform algorithms function as the distribution infrastructure. Like any infrastructure, these can be compromised or exploited. Most users assume that algorithms are neutral recommendation engines, but they can easily be manipulated to drive specific results.
- Medium: How does it spread? These platforms are engineered systems, and understanding their design choices is key to understanding the attack surface. Social platforms prioritize engagement over accuracy, amplifying volatile content by design and valuing profit over societal cohesion and democratic values.
- Audience: Who is targeted? Threat actors conduct reconnaissance to identify vulnerable populations, just as they would scan for vulnerable endpoints in data systems. Diaspora communities facing transnational repression and digital-native young people who lack the understanding to recognize manipulation are primary targets.
Third, we cannot wait for governments to establish the framework for addressing these problems. Legislation moves slowly, and as we in this industry well know, information technology evolves faster than regulation can keep up.
Finally, we must be sensitive to the fact that concerns around how bad-faith actors might manipulate information will invite questions about where the line falls between defence and censorship. This tension is real and must be navigated carefully.
We must be clear that we are not working against citizens’ rights to hold particular beliefs. We are working to ensure that their sources of information have not been unfairly manipulated by foreign state actors seeking to degrade democratic governance, societal cohesion, and political will to resist.
So much of our work in the cybersecurity industry already focuses on preventing just these sorts of attacks and ensuring that data, private information, and public information remain unaffected by bad-faith action and criminal activity.
Defence requires constant vigilance against sophisticated adversaries. The cognitive domain is now part of our operational environment. Cognitive warfare works in darkness, in complacency, in the gap between the war being waged and our recognition of it.
But once we see it, we can defend against it. The question is not whether we will encounter these threats. It is whether we will recognize them early, and defend against them as an industry.
To stay informed on the latest topics in cybersecurity and digital resilience, subscribe to the Catalyst newsletter.
About Elizabeth Anderson
Elizabeth Anderson is a Fellow at the Montreal Institute for Global Security. She currently serves as a Research Director for the Konrad-Adenauer-Stiftung (Canada), leading research on digital autocracy and democratic resilience. She was selected for the New Security Leaders program of the 2025 Warsaw Security Forum. She served as a Fulbright Scholar (2024–25) at the Center for a New American Security, where she was a visiting associate fellow with the Transatlantic Security Program.
Previously, she served as Director of Operations and Senior Advisor for International Security to Canada’s Minister of Foreign Affairs. She was one of the leads on Canada’s policy response to Russia’s full-scale invasion of Ukraine, coordinating with allied governments during critical periods of the war, including in the lead-up to the invasion. She oversaw military export controls valued at C$3.5 billion annually and directed operations for high-stakes consular cases.
Her research examines how authoritarian regimes employ grey-zone warfare, with a particular emphasis on the cognitive domain, to subvert democratic cohesion and political will. She is particularly interested in how democracies can work with like-minded partners to protect liberal values and advance shared prosperity.
She holds a Master of Arts in Global Risk from Johns Hopkins University’s School of Advanced International Studies and a Bachelor of Arts (Honours) in Political Studies from Queen’s University.