The Next Great Cybersecurity Threat: Agentic AI

Make no mistake about it, agentic AI will be an important security concern for companies — both large and small — over the next several years. This isn’t a distant forecast but a quickly materializing reality. The capabilities that make these systems — AI entities that can perceive, reason, decide and act autonomously — so revolutionary also create profound security challenges. AI is no longer a mere tool; it is evolving into an active, often unpredictable participant in our digital and physical worlds.

The Agentic Shift: Not Just New Tools, but a New Threat Paradigm

The advent of generative AI (GenAI) has fundamentally altered the operational landscape. An ongoing cascade of advances is collapsing development timelines and rendering old benchmarks obsolete. For cybersecurity, this means traditional models, largely built around human-driven attack patterns and established defenses, will become insufficient.

Agentic AI introduces threats that are different in kind, not merely in degree. Imagine malware that requires no command and control (C2) infrastructure because the agent is the C2, capable of autonomous decision-making and evolution. Consider AI-powered botnets that don’t just execute preprogrammed attacks but can collude, strategize and adapt in real time. 

One day, we will face AI agents that autonomously generate novel exploits. These agents will conduct hyperpersonalized deepfake social engineering at scale and learn to bypass defenses until they become nearly undetectable. The nature of the “most likely attack path” changes when the attacker’s risk tolerance and operational calculus are those of an AI rather than a human.

Three Fault Lines in Our AI Defenses

The insights gathered from cybersecurity and AI experts at a recent Agentic AI Security Workshop paint a stark picture. While agentic systems are being embedded in countless locations — from company workflows to critical infrastructure — our collective ability to govern and secure them lags dangerously. This gap creates a crisis defined by three critical fault lines in our current approach.

  1. The Supply Chain and Integrity Gap: We are building on foundations we cannot fully trust. Pressing questions remain about the integrity of the AI supply chain. How can we verify the provenance of a model or its training data? What assures us that an agent hasn’t been subtly poisoned during its development? 

    This risk of a “digital Trojan horse” is compounded by the persistent opacity of many AI systems. Their lack of explainability critically hinders our ability to conduct effective forensics or robust risk assessments. (A minimal sketch of one such provenance check appears after this list.)

  2. The Governance and Standards Gap: Our rules and benchmarks are dangerously outdated. Many regulations and governance frameworks crafted for the pre-AI era are only now beginning to address emerging policy concerns like accountability or liability for AI-caused harm.

    Furthermore, the digital landscape lacks a common yardstick for AI security. There is no equivalent of an ISO 27001 certification, making it extraordinarily difficult to establish baselines for trust. If a major AI-specific incident occurs, we possess no “AI-CERT,” that is, no specialized international body ready to orchestrate a response to attacks that will look nothing like what has come before.

  3. The Collaboration Gap: The experts needed to solve this problem are not speaking the same language. A deep chasm exists between AI researchers and cybersecurity professionals. It’s a mutual blind spot that hampers the development of holistic solutions. This fragmentation is mirrored on the global stage. AI threats respect no borders, yet the international cooperation required for sharing AI-specific intelligence and establishing widely accepted protocols remains more nascent than operational,1 leaving our collective defense dangerously siloed.
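To make the provenance question in the first fault line concrete, the sketch below shows one narrow slice of what “verifying a model artifact” can mean in practice: checking a downloaded weights file against a digest the publisher is assumed to distribute. The file name and digest are hypothetical placeholders, and a hash check alone says nothing about training-data lineage or build-pipeline integrity; those require signed attestations across the supply chain.

```python
# Minimal sketch: verify a downloaded model artifact against a publisher-provided
# SHA-256 digest before loading it. The artifact name and expected digest below
# are hypothetical placeholders, not real values.
import hashlib
from pathlib import Path

EXPECTED_SHA256 = "replace-with-the-digest-published-by-the-model-provider"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so multi-gigabyte weight files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

artifact = Path("agent-model-v1.safetensors")  # hypothetical file name
if sha256_of(artifact) != EXPECTED_SHA256:
    raise RuntimeError(f"Integrity check failed for {artifact}; refuse to load it.")
print(f"{artifact} matches the published digest.")
```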

A New Blueprint for a Secure Agentic Future

The scale of this challenge demands a fundamental, collaborative effort across the entire ecosystem. The concerns outlined here are meant to catalyze action, not to induce fear. We must learn from past technological revolutions. We must embed security, ethics and governance into the fabric of agentic AI from this crucial early stage, rather than attempting to bolt them on after crises emerge.

This requires a new social contract. The research community must prioritize investigations into AI supply chain security and explainable AI. Industry consortia must continue to spearhead the development of globally recognized frameworks for AI governance and risk management, making “Secure AI by Design” the non-negotiable baseline. Cybersecurity vendors must accelerate the creation of a new generation of AI-aware security tools. And policymakers must craft agile, informed legislative frameworks that foster responsible innovation while establishing clear lines of accountability.

For business leaders and boards, the mandate is clear: Champion the necessary investments, foster a culture of AI security awareness and demand transparency from your vendors and internal teams. The stakes could not be higher, as agentic systems begin to manage critical operations in finance, healthcare, defense and infrastructure. The time to act is now, collectively and decisively, to ensure that the incredible potential of agentic AI serves to benefit, not to undermine, our shared future.

Let’s Deploy Bravely together.


1 “AI Cybersecurity Collaboration Playbook,” Cybersecurity and Infrastructure Security Agency, January 14, 2025.
