How AI Increases the Load on Security Teams

And why automation may be the only way to keep up

  • AI is speeding up development and vulnerability discovery, making it so security teams handle far more findings and fixes than before. 
  • While AI can help security teams scale tasks like code reviews, it only works when it’s carefully tuned to match real risk. 
  • As code, threats, and alerts all move faster, automation (not manual processes) is the only way security teams will be able to catch up.

When an AI agent—acting as a developer in my IDE—told me that another AI agent—acting as a code reviewer in the pipeline—incorrectly flagged a security vulnerability in its code, I felt a sense of déjà vu. Fundamental aspects of development, like disputes between coders and reviewers, haven't changed. Both AI agents were using the same popular model yet they disagreed just like any two human developers. And I had to step in and arbitrate. 

The more I thought about it, it dawned on me that this was about to happen on a much faster, larger scale in novel ways.

Maintainers and the case of keeping up

A few weeks ago, I read an article in The Register about how open source maintainers suddenly became more busy with AI generated bug reports. Before, the bug reports they received were mostly AI slop, which increased the load, but the slop could be easily discarded. At some point in the past months, the quality of the reports improved—and suddenly, they had to implement fixes. Projects with large teams of maintainers can still handle the load, but smaller projects are getting overloaded.

What does that mean for security teams? Possibly safer open source in the long run, but only if maintainers can keep up. More valid security bug reports means more vulnerabilities that need to be patched, and attackers may actively exploit some while maintainers work through all the backlogs. Attackers, after all, are also using AI to find and weaponize these flaws.

1 : 10 : 100 

We used to say that for each cloud product, you have one security professional to 10 DevOps to 100 developers. AI changed that ratio. Each developer now has a team of AI agents working for them. Yes, development teams are becoming smaller and more efficient, but that alone won’t alleviate security load. Code churns out at a much higher speed—and it’s hardly free of flaws while still leaning on many understaffed open source projects. 

Judging AI security findings

One thing that was fascinating about my experience with the AI code reviewer was where the bug claim was: in an .mdc file. For anyone unfamiliar, an .mdc is an AI command file written in Markdown (a text format used for documentation) such as the familiar README.md files on Github.

My developer AI was writing instructions for other developer AIs inside that .mdc file. The instructions included the execution of a script to collect information and another script to create reports in html. Meanwhile, the reviewer AI was concerned that other AIs who were reading the instructions were not instructed to do input validation—this could lead to Shell Command Injection in the collection script and Cross-Site Scripting in the html generator. 

The developer AI argued that all security measures were in place in the supporting scripts. This was the second round of code review and the first time the reviewer’s suggestions had been accepted. As a result, the process invocation and html generation were correctly hardened and the developer AI felt that their fix was sufficient.

Was it actually exploitable?

The fix requested by the AI reviewer included adding more instructions in textual format, in the MD, like: “ensure the input does not include special characters.”

As a big supporter of input validation as an attack surface reduction strategy, I agreed with the suggestion. However, had this been presented to me as a security vulnerability once the code was published, I would have rejected it on the basis that this issue was not exploitable. 

Even though I agreed with the best practice—especially since it was easy enough to implement before merging the code—there were already security measures in place in areas of risk. Input validation was just an additional safeguard.

This may be controversial to some security practitioners, but I think going through all the trouble of publishing, monitoring, and patching a Common Vulnerability Exposure (CVE) should be justified. 

Keeping up with vulnerability reports

The CVE system, used to track vulnerabilities in software, is often understaffed and overwhelmed by a large number of CVE IDs (a unique identifier assigned to a specific vulnerability).

The number of CVEs published each year has more than doubled over the past five years.

Source: https://www.cve.org/about/Metrics
Source: https://www.cve.org/about/Metrics

As of March 2026, the National Vulnerability Database (NVD) has received roughly 15,000 new CVEs this year. Struggling to keep up with the influx, only about 11,000 have been analyzed. At this rate, by the end of 2026, we’ll hit 60,000—triple the previous amount from 2021.

And it’s not just NVD under strain. Security and development teams everywhere are struggling to keep up with patching. It stands to reason that as an industry we should be frugal with what we consider a CVE and not squander already stretched resources on hypothetical or impractical bugs.

AI may be helping with patching, but it’s also introducing new CVEs. The Georgia Tech Vibe Security Radar has been tracking vulnerabilities introduced by AI Tools and that number is rapidly increasing. 

Vulnerabilities by Month
Vulnerabilities by Month

AI-specific security flaws

Beyond known security weaknesses, AI introduces an entirely new class of attack vectors.

For the past two decades, The Open Web Application Security Project (OWASP) has published its OWASP Top 10, tracking the most critical security risks in web application, API and mobile threats. 

Now, there are more lists to keep track of: the 2025 OWASP Top 10 for Large Language Models and the 2026 OWASP Top 10 for Agentic Applications. These highlight threats unique to AI systems, like prompt injection, an attack where attackers hide malicious instructions in data processed by AI.

Dealing with the disruption

So what can we do about this new, rapidly evolving threat vulnerability storm? 

Security teams have to radically change how they operate. Defense needs to happen at scale, with AI augmenting security workflows and integrating across areas security has struggled with before. 

Even if sometimes we have to intervene in arguments between AI agents, they still provide powerful support for targeted code reviews—provided we understand how to tune them to align with our risk appetite. 

The costs of AI code review at scale

AI is not the answer to everything. If you throw your entire code base at AI—let’s say 100 repos and—100 million lines of code—you will quickly run into a ton of practical problems. Token costs will add up real fast, breaking the bank. But just as importantly, you will create a lot of noise and disruption for development teams and negatively impact product stability. 

Now run this process on every nightly build and you will soon be out of business.

Security teams need to see the bigger picture and identify where AI fits and where good ol’ traditional automation is still the better and cheaper option. There is a good reason why we supplemented human code reviewers with automated scanners. The same principle applies here. AI code reviewers are valuable, but they work best when used strategically and supported by strong automation that filters, prioritizes, and operationalizes the results. 

The real challenge is velocity

AI-assisted development is increasing the speed at which code is written, vulnerabilities are discovered, and alerts are generated. Attackers are also moving faster, leveraging AI to identify and exploit weaknesses left and right at machine speeds. 

In other words, risk, threat, and attack velocity are all increasing at the same time. The only way to safely channel this incoming flood is to automate as much as we can. It will be cheaper and more scalable having AI write repeatable algorithmic automation, than having AI perform the analysis every single time.

What to do about vulnerability exploitation

We must accept that in spite of all our best efforts we won’t be able to prevent all security bugs or patch all of them in time and some of them will end up being exploited. Companies must invest in scalable threat detection to stop the bleeding fast in case of an attack and allow response teams to focus on the alerts that matter.

Security is needed now more than ever

All in all, security investment must increase to deal with the increased load. 
When securing AI-driven loads at scale, teams should:

  • Leverage AI early in development to find security bugs and speed patching and development without creating risks.
  • Proactively counter novel threats targeting AI tools.
  • Automate processes wherever possible to keep pace with the influx of code and bugs.
  • Scale up threat detection and response capabilities, by using best of breed security tools.

Security teams have seen disruptive shifts before, but the pace of change itself is increasing. It is an exciting time. But then again, it’s not like we’re not used to excitement.

You might also enjoy

Explore Upcoming Events

Find experts in the wild

See what's next