
Move Fast, Break Everything: The Risk Landscape of A.I.

David Whyte, the CTO of Tidal Point Software, joined us at the second annual Catalyst Summit, held this past November 12 in Brampton, where he walked workshop attendees through the evolving threat landscape that the advent of AI has brought to cybersecurity.

This felt particularly relevant because, in the year since the first Summit in 2024, AI technologies have evolved from an emerging talking point — something quickly approaching, which would change the industry in manifold ways — to something that is now ubiquitous and integrated into every tool we use.

“These features are coming to us whether we want them or not,” Whyte said, pointing out that even Google Search has been updated to prioritize AI-driven summaries. Traffic to generative AI websites grew 900% in 2024 alone.

To Whyte, AI presents a new attack surface in secure digital technologies, thereby creating new threat vectors that cybersecurity professionals must remain aware of.

Traditional technology risks unfold in familiar, predictable ways. AI technologies, by contrast, propagate more rapidly and less transparently: they rely on interconnected systems and depend on a small set of specialized vendors, which further increases exposure.

Finally, generative AI tools are vulnerable to hidden biases that stem from their training data and optimization objectives. These biases are then amplified on a larger scale.

In terms of cybersecurity, one area where AI technology is unusually vulnerable is its concentration risks: the technology is threatened by supply chain issues related to chips and processing power, and by its dependence on international cloud suppliers who are themselves vulnerable to attack. 

When Google goes down, everyone feels it. Likewise, when smart system-enabled supply chains are attacked, or cloud providers are breached, the downstream impact on the AI tools that depend on those pipelines is far beyond the control of the organizations using them.

Another risk inherent to this concentration: the AI industry becomes dependent on a few large suppliers, giving those suppliers undue bargaining power over the use of client data. If a client organization does not like what those suppliers do with their data, does it have an alternative? We’ve yet to reach a market scale where safe purchasing is genuinely possible.

The vulnerability of training data should also be considered, especially because after-the-fact validation of an LLM’s responses can be so tricky. Unless you’re an expert in the domain you are querying with AI, the answer generated might sound good to you, even if it is wrong. Engineers need to do significant work to ensure the contextual response is actually correct.

Training data itself, then, becomes a vector for cyberattack through the injection of biases, disinformation, and misinformation, which may pass unseen without rigorous verification by the engineering team.
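The verification problem Whyte describes can be illustrated with a minimal sketch. Here, `ask_llm` is a hypothetical stand-in for any model call (its canned, deliberately wrong answer is invented for illustration), and a simple guardrail refuses to mark an answer as verified unless it matches a trusted reference:

```python
# Hypothetical sketch: cross-checking a generated answer against a trusted
# reference before accepting it. `ask_llm` stands in for a real model call.

TRUSTED_FACTS = {
    "default https port": "443",
    "default ssh port": "22",
}

def ask_llm(question: str) -> str:
    # Stand-in for a real model; returns a plausible but wrong answer.
    canned = {"default https port": "8080"}
    return canned.get(question, "unknown")

def validated_answer(question: str) -> tuple[str, bool]:
    """Return the model's answer plus whether it matched a trusted source."""
    answer = ask_llm(question)
    trusted = TRUSTED_FACTS.get(question)
    verified = trusted is not None and answer == trusted
    return answer, verified

answer, ok = validated_answer("default https port")
print(answer, ok)  # the plausible-sounding answer fails verification
```

The point of the sketch is the asymmetry Whyte highlights: generating a confident answer is cheap, while building the trusted reference needed to verify it is the expensive engineering work.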

Then there are the obvious threats of AI in cyber attacks themselves. Traditional attack models can now be enhanced and amplified, allowing for extremely precise phishing at scale, for example. 

Whyte proposes that the best use case for AI in the cybersecurity industry is to bolster traditional operations, for example by using AI to triage alerts in the security operations centre (SOC). This makes AI a force multiplier for security teams.
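As a rough sketch of what that triage looks like (the weighted heuristic below is an invented stand-in for whatever scoring model a real SOC tool would use), alerts are ranked so analysts see the riskiest ones first:

```python
# Sketch of AI-assisted alert triage: a trivial weighted heuristic stands
# in for a trained classifier, ranking alerts by estimated urgency.

from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int            # 1 (low) .. 5 (critical), from the detection tool
    asset_criticality: int   # 1 .. 5, importance of the affected asset

def triage_score(alert: Alert) -> float:
    # Illustrative weights; a production system would learn these.
    return 0.6 * alert.severity + 0.4 * alert.asset_criticality

def triage(alerts: list[Alert], top_n: int = 3) -> list[Alert]:
    """Return the top_n alerts the team should look at first."""
    return sorted(alerts, key=triage_score, reverse=True)[:top_n]

alerts = [
    Alert("ids", severity=2, asset_criticality=1),
    Alert("edr", severity=5, asset_criticality=5),  # critical on critical
    Alert("waf", severity=3, asset_criticality=4),
]
print([a.source for a in triage(alerts, top_n=2)])  # ['edr', 'waf']
```

The "force multiplier" effect comes from the sort, not the scorer: even a crude ranking means analyst attention goes to the top of the queue instead of being spread evenly across noise.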

Generative AI can also help SOC managers with one of the oldest communication hurdles in the technology industry: the one where business leaders “don’t speak tech.” Simple generative AI tools can translate technical findings into plain language that can be shared across the technical and executive layers of a business or organization, accelerating decision-making and strategic understanding.

In his closing commentary, Whyte advised that the “shiny new technology” aura of AI is, itself, a cyber risk. As with implementing any technology, proper governance is key, ensuring that AI is being added to an organization’s tool set in use cases that actually provide value, rather than mere glamour.

Trustworthy AI, according to Whyte, requires observability. Is the AI working as intended and fit for purpose? 

For cybersecurity professionals, trustworthy AI also requires explainability from the teams implementing it: a clear line of sight into why and how the AI generates its outputs, and a methodology to combat hallucinations and bias. 

Otherwise, AI rapidly risks becoming a “black box” technology for client organizations that use it, leaving them vulnerable to entirely new attack risks in a landscape that is evolving as quickly as machines can learn.
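One lightweight way to start building that observability, sketched under assumptions (the `model_call` function below is a hypothetical stand-in for any AI invocation), is to wrap every model call so the prompt, output, and latency are recorded for later audit:

```python
# Sketch: an audit-trail wrapper around a hypothetical AI model call,
# so every prompt and output can be reviewed for drift, bias, or error.

import time

AUDIT_LOG: list[dict] = []

def model_call(prompt: str) -> str:
    # Hypothetical stand-in for a real model invocation.
    return f"summary of: {prompt}"

def observed_call(prompt: str) -> str:
    """Invoke the model and record an audit entry for this call."""
    start = time.monotonic()
    output = model_call(prompt)
    AUDIT_LOG.append({
        "prompt": prompt,
        "output": output,
        "latency_s": round(time.monotonic() - start, 4),
    })
    return output

observed_call("Q3 phishing alert trends")
print(AUDIT_LOG[-1]["prompt"])
```

A log like this is the minimum raw material for the explainability Whyte calls for: without a record of what the model was asked and what it produced, there is nothing to audit when an output turns out to be wrong.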

About David Whyte 

Dr. David Whyte is the Co-Founder and Chief Technical Officer (CTO) of Tidal Point Software, where he drives the development of solutions that integrate AI, privacy, and security to enhance cyber resilience.

He is also a Learning Facilitator for MIT Professional Education’s Applied Generative AI for Digital Transformation course and serves as the Head of Cyber Security for Gen AI Global, an AI-powered professional network created in collaboration with MIT Professional Education to accelerate the adoption of generative AI across industry.

Previously, Dr. Whyte was Head of Corporate Security and the creator of the Cyber Resilience Coordination Centre (CRCC) at the Bank for International Settlements (BIS) in Basel, Switzerland, where he oversaw global IT and physical security, incident response, and international cyber resilience coordination. Prior to that, he served as Technical Director of Cyber Defence at Canada’s Communications Security Establishment (CSE), leading the development of next-generation cyber threat detection services for the Government of Canada. He holds a PhD in Computer Science from Carleton University and a Master’s degree in System Design and Management from the Massachusetts Institute of Technology (MIT).

More from the Catalyst

Fill out the form below to subscribe to The Catalyst Connect newsletter and stay in the know:

Contact Us

*By clicking submit, you consent to receive emails from Rogers Cybersecure Catalyst.
