Weaponized Intelligence

AI is giving attackers their most powerful weapon. Now it has to become the defense.

Frontier model capabilities are no longer theoretical. Anthropic, OpenAI, and others will soon be releasing models that are, by any honest measure, proficient at finding vulnerabilities at scale. These are not incremental improvements. Imagine a horde of agents methodically cataloging every weakness in your technology infrastructure, constantly.

Over the next six months, the barrier to entry for sophisticated attacks will continue to diminish. A hacker’s dream weapon will be available to anyone with a credit card and compute.

What makes this moment different is not just capability. It is the asymmetry, and for now, it favors the attacker. A single bad actor can now run campaigns that once required entire teams. The models do not sleep, they scale, and they only have to be right once. Defenders have to be right every time. That is not a fair fight.

The vulnerabilities being targeted are not hard to find. The average company relies on thousands of tech vendors and millions of open-source dependencies with years of accumulated exposure: configuration errors, overlooked API endpoints, access policies that once made sense and were never revisited. It is old chaos that has never been fully remediated, and the new models are very good at finding it.

And it compounds. Employees are testing agents without fully understanding the exposure they are creating. Vibe coding has made software creation accessible to people who were never trained to think about the attack paths it opens. Every desktop now effectively behaves like a server, and is likely to have unsupervised AI tools operating near sensitive systems. The attack surface keeps growing, mostly unnoticed.

The reckoning will arrive sooner than most leaders expect.

The fastest AI-assisted attacks already move from initial access to exfiltration in 25 minutes, while the average enterprise still takes days to detect an intrusion. These numbers were already uncomfortable. The new frontier models will make them untenable. No company is immune. Not even the AI data centers powering these models, which run on the same enterprise IT they are capable of exposing. The models are not the solution yet; for now, they exacerbate the problem.

The question I hear most often is: Now what?

The role of the cybersecurity industry: The models creating this problem must become part of the defense, fighting AI with AI

The same models that find and exploit vulnerabilities can also be part of the defense, but only if they are quickly integrated into defensive solutions. The advantage for defenders is the ability to deploy these models to identify, validate, and patch the same vulnerabilities they uncover, essentially in real time. Attackers have access to this technology, and so do we. The strategy is clear: we must fight AI with AI. The sketch below shows the shape of that loop.
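
A minimal, hypothetical sketch of that find-validate-patch cycle, in Python. Every name in it (model_scan, validate, remediate) is an illustrative placeholder rather than a real product or model API; the point is the shape of the loop, with validation gating any automated change.

```python
# Hypothetical "fight AI with AI" loop. All names and data here are
# illustrative placeholders, not a real product or model API.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str       # where the weakness lives
    weakness: str    # what the model flagged
    severity: float  # model-estimated impact, 0.0 to 1.0

def model_scan(inventory: list[str]) -> list[Finding]:
    """Stand-in for a frontier model enumerating weaknesses per asset."""
    # A real implementation would call a model; these findings are faked.
    return [
        Finding("api.internal/admin", "endpoint reachable without auth", 0.9),
        Finding("legacy-batch-host", "stale access policy never revisited", 0.4),
    ]

def validate(finding: Finding) -> bool:
    """Reproduce the finding in isolation before any automated change."""
    return finding.severity >= 0.7  # placeholder for a sandbox reproduction

def remediate(finding: Finding) -> None:
    print(f"patching {finding.asset}: {finding.weakness}")

def defend(inventory: list[str]) -> None:
    # The attacker's capability, pointed inward and run continuously.
    for finding in model_scan(inventory):
        if validate(finding):
            remediate(finding)  # validated fixes ship immediately
        else:
            print(f"queued for human review: {finding.asset}")

defend(["api.internal", "legacy-batch-host"])
```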

It’s important to remember that these models aren’t effective as comprehensive defense systems. They never will be. They are powerful, but they need scaffolding that we’ve spent years building. The scaffolding for cybersecurity includes:

  • Sensors: Coverage across the network, cloud, endpoints, and browsers that both collates data and stops known threats before they go further. Years of technology have been built to protect the edge, and it will need to be supplemented with AI. Models cannot fix what the sensors cannot see. Edge instrumentation is not optional; it is the first condition.
  • AI-enabled data lakes: Sensors alone generate noise. What converts noise into actionable intelligence is context, and context requires a rich, security-specific data lake built to receive, normalize, and retain signal. A logical data lake gives AI the context to convert a suspicious signal into a confirmed detection and an actual response (see the sketch after this list). The data lake is not mere storage; it lets models analyze data on the fly, in combination with years of machine learning algorithms the industry has built to anticipate edge cases and known techniques. That combination is hard to replicate, and harder to attack.
  • Build the foundation, break the silos: There has never been a more important time to reduce fragmentation in the cybersecurity stack. Research shows that in 75% of breaches, logging existed that should have flagged anomalous behavior, but the critical signals were buried across fragmented tools and never actioned before it was too late. The data needs to live in one place, and the modern tools need to be self-healing. That gap was manageable when attacks moved at human speed. At the speed AI enables, it will become untenable. Consolidation is not a modernization preference. It is a prerequisite.
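
The value of this scaffolding is easiest to see in miniature. Below is a toy, assumed sketch, not any vendor's real schema or API: fragmented tools would each see one event and dismiss it as noise, while a single normalized data lake sees both and confirms the threat.

```python
from collections import defaultdict

# One logical data lake: events from network, cloud, endpoint, and browser
# sensors, normalized to a shared (assumed) schema and keyed by entity.
data_lake: dict[str, list[dict]] = defaultdict(list)

def ingest(event: dict) -> None:
    """Receive, normalize, and retain a sensor event."""
    data_lake[event["entity"]].append(event)

def confirm(alert: dict) -> bool:
    """A raw alert becomes actionable only with cross-sensor context."""
    history = data_lake[alert["entity"]]
    sources = {e["sensor"] for e in history}
    # Corroboration across independent sensors stands in for "context" here;
    # real systems layer years of ML detections on top of this join.
    return len(sources) >= 2

ingest({"entity": "host-42", "sensor": "endpoint", "action": "new admin tool installed"})
ingest({"entity": "host-42", "sensor": "network", "action": "burst of outbound data"})

alert = {"entity": "host-42", "sensor": "network", "action": "burst of outbound data"}
print("confirmed threat" if confirm(alert) else "noise")  # prints: confirmed threat
```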

The solution is not to pit the LLMs against cybersecurity; it is for the AI labs and the defenders to work together.

  • AI labs need to release these capabilities responsibly, while ensuring that defenders and national security agencies have been consulted.
  • New cybersecurity and agentic-workflow capabilities should be secure by design, not launched by AI companies with no regard for security.
  • Defenders need to be able to adopt these capabilities swiftly, so that we can fight AI with AI.

The stakes are high. The window to act is open, and we need to act swiftly with intent, together. Every security leader, every board, every AI company needs to treat this with the urgency it demands. 

This is the cybersecurity industry’s most consequential moment.

Get the foundation right, and AI becomes the defender. Get it wrong, and no model in the world will save you. 

Our work is well underway. Across the industry, with the AI labs, with technology vendors, with our partners, and with our customers, we are building the foundation that makes defense possible. The AI labs have a role to play, and so do all of you.

Cybersecurity resources just got dearer.
