Why the approaching flood of vulnerabilities changes everything — and what to do about it
AI-driven discovery, NIST’s retreat from universal enrichment, and the end of “good enough” vulnerability management
Key takeaways
- AI-driven discovery tools are accelerating CVE volume, resulting in an expected deluge of 59,000 disclosed vulnerabilities this year.
- NIST has dramatically scaled back enrichment of CVEs in the National Vulnerability Database (NVD), which will leave most new vulnerabilities without the critical metadata and severity scores required for traditional automated prioritization and patching.
- Tenable customers aren’t impacted by these developments because our exposure intelligence infrastructure hasn’t depended upon NVD enrichment for years.
Three things happened in April 2026 that, taken together, represent the most significant shift in the vulnerability management landscape in over a decade. If you’re still relying on hybrid methods — spreadsheets alongside scanners, manual triage stacked on top of automated feeds, hope as a strategy — these developments deserve your full attention.
The new vulnerability management landscape
On April 7, Anthropic launched Project Glasswing, a cybersecurity initiative built on its Claude Mythos frontier AI model. The premise: frontier AI can discover zero-day vulnerabilities in software at a speed and scale that human researchers cannot match. Glasswing’s coalition includes CrowdStrike, Palo Alto Networks, and Cisco, and its early results suggest the premise is credible.
On April 8, five Forrester analysts — including Jeff Pollard and Allie Mellen, two of the most-cited voices in security research — published their perspective in the blog “Project Glasswing Shows That AI Will Break The Vulnerability Management Playbook.” According to Forrester, “Discovery accelerates, but inventory lags behind reality.” Their conclusion was blunt: “The limiting factor in security is no longer the ability and knowledge to find problems — it’s the ability to absorb, prioritize, and act on them before adversaries do.”
Then, on April 15, the National Institute of Standards and Technology (NIST) announced a decision that many of us had seen coming. NIST will now enrich Common Vulnerabilities and Exposures (CVEs) in only three narrow categories: those in CISA’s Known Exploited Vulnerabilities (KEV) catalog, those affecting federal government software, and those affecting critical software under Executive Order 14028.
Everything else — the vast majority of the approximately 59,000 CVEs that the Forum of Incident Response and Security Teams (FIRST) projects for 2026 — will be listed but carry no actionable metadata from NIST. The reason? The National Vulnerability Database (NVD) — the public infrastructure that thousands of organizations depend on for severity scores, product mappings, and weakness classifications — can no longer keep up.
Let that sink in. A CVE without a Common Vulnerability Scoring System (CVSS) score, Common Platform Enumeration (CPE) mapping, or Common Weakness Enumeration (CWE) classification is, for most automated scanning and prioritization tools, functionally invisible. It exists in the database, but it tells you nothing about how severe it is, what products it affects, or what kind of flaw it represents.
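To make that concrete, here is a minimal sketch in Python against the public NVD JSON 2.0 API (the CVE ID is a placeholder; field names follow the API 2.0 schema) of the checks an automated pipeline typically keys on:

```python
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id: str) -> dict:
    """Fetch a single CVE record from the public NVD JSON 2.0 API."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    return resp.json()["vulnerabilities"][0]["cve"]

def enrichment_present(cve: dict) -> dict:
    """Report which enrichment fields exist on the record."""
    return {
        "cvss": bool(cve.get("metrics")),         # severity scores
        "cpe": bool(cve.get("configurations")),   # affected-product mappings
        "cwe": bool(cve.get("weaknesses")),       # weakness classification
    }

# Placeholder ID; substitute a real, recent CVE to try this yourself.
record = fetch_cve("CVE-2026-12345")
print(enrichment_present(record))
```

On an unenriched record, all three checks come back False, and a pipeline keyed on severity thresholds or product matches has nothing to act on.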
The compounding problem
These three developments compound each other.
AI-driven discovery tools — NIST explicitly named Anthropic’s Claude Mythos and OpenAI’s GPT-5.4-Cyber — are accelerating the CVE pipeline. In Q1 2026, CVE submissions grew 33% compared with Q1 2025, and submissions increased 263% between 2020 and 2025 (roughly 29% average annual growth). NIST enriched a record 42,000 CVEs in 2025, 45% more than in any prior year. It still wasn’t enough.
More CVEs are entering the pipeline at the exact moment the public infrastructure for making sense of them is contracting. The gap between what defenders need to know and what the freely available intelligence baseline can tell them is widening from both directions simultaneously.
Why “good enough” is over
Here’s where this gets personal for the security team running a hybrid vulnerability management program.
If your prioritization workflow starts with “check NVD for severity,” you now have a structural gap. Most new CVEs will arrive without NIST’s severity assessment. If you’re relying on CVSS scores from the NVD to decide what to patch first, a growing portion of your attack surface is becoming invisible to your process — because the metadata you depend on to see vulnerabilities isn’t coming.
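The failure mode is mechanical. Here is a minimal sketch (invented records and field names, not any particular product’s schema) of how unscored CVEs silently fall out of a severity-threshold filter, and one defensive alternative:

```python
# Illustrative records: 'cvss' is None when NVD has not enriched the CVE.
cves = [
    {"id": "CVE-A", "cvss": 9.8},
    {"id": "CVE-B", "cvss": None},  # unenriched: no NIST severity yet
    {"id": "CVE-C", "cvss": 5.4},
]

# Naive filter: anything without a score silently drops out of scope.
naive_queue = [c for c in cves if (c["cvss"] or 0) >= 7.0]
# -> only CVE-A survives; CVE-B vanishes even though its severity is unknown.

# Defensive alternative: never equate "no score" with "no risk". Route
# unscored CVEs to a triage bucket fed by other evidence (exploitation
# telemetry, vendor advisories) instead of dropping them.
patch_queue = [c for c in cves if c["cvss"] is not None and c["cvss"] >= 7.0]
triage_queue = [c for c in cves if c["cvss"] is None]
```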
If your team is already stretched — triaging manually, escalating by gut instinct, playing whack-a-mole with whatever landed in the inbox this morning — the math is about to get worse. The 48,448 CVEs published in 2025 are projected to grow to 59,000 in 2026, with realistic scenarios forecasting 70,000 to 100,000. And that’s not a one-year spike. FIRST’s three-year outlook projects sustained increases through 2028.
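The back-of-envelope arithmetic makes the point. The 15-minutes-per-CVE figure below is an illustrative assumption, not a measured benchmark:

```python
# Rough triage workload at different annual CVE volumes.
TRIAGE_MINUTES = 15  # assumed time for an initial look at one CVE

for annual_cves in (48_448, 59_000, 100_000):
    per_day = annual_cves / 365
    analyst_hours = per_day * TRIAGE_MINUTES / 60
    print(f"{annual_cves:>7,} CVEs/yr -> {per_day:5.1f}/day "
          f"-> {analyst_hours:4.1f} analyst-hours/day")
```

At 2025 volume that is already about four full-time analysts doing nothing but first-pass triage; at the high-end 2026 forecast it approaches nine.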
The organizations that have been managing this with hybrid methods — some automation here, some manual triage there, some spreadsheets holding it all together — are approaching a breaking point. The volume alone will overwhelm any process that doesn’t have an intelligence layer capable of filtering signal from noise at scale. Hope is not a strategy when the pipeline is accelerating and the public baseline is retreating.
What actually matters: the output
There’s an important distinction buried in the Glasswing and GPT-5.4-Cyber announcements that often gets lost in the headlines. AI-driven vulnerability discovery is an input. It finds flaws in code. That’s valuable work, and it’s going to accelerate. But finding a vulnerability is the beginning of the intelligence problem, not the end of it.
The question that defenders need answered at 2 a.m. is not: Does this software contain a flaw? The questions are: Is this flaw being actively exploited? By whom? Are they targeting my industry? Do I have exposed instances in my environment? What’s my patching SLA given the current threat tempo? Should I wake someone up, or can this wait until morning?
No code-scanning AI tool answers those questions. Not Claude Mythos. Not GPT-5.4-Cyber. Not any model on the horizon. Those answers require a fundamentally different kind of intelligence — one built on threat actor attribution, campaign tracking, real-time exploitation evidence, and deep knowledge of your deployed environment.
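One way to see the gap is to compare the shape of the two records. The field names below are invented for illustration, not any vendor’s schema:

```python
from dataclasses import dataclass

@dataclass
class BareCVE:
    """What discovery produces: a flaw exists, somewhere."""
    cve_id: str
    description: str

@dataclass
class FinishedIntel:
    """What a defender needs at 2 a.m. (illustrative fields)."""
    cve_id: str
    exploited_in_wild: bool         # is anyone actually using it?
    threat_actors: list[str]        # who is using it?
    targeted_industries: list[str]  # are they after my sector?
    exposed_assets: list[str]       # do I have affected instances?
    recommended_sla: str            # patch now, or next maintenance window?
```

Every field after cve_id in the second record is something no amount of code scanning can supply.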
This is the distinction that Tenable built its exposure intelligence capabilities around. Tenable goes beyond identifying what’s vulnerable. It tells you what’s being exploited, by whom, against which of your assets, and what you need to do about it before the window closes.
How Tenable fills the gap — with evidence
Tenable’s Research Special Operations (RSO) team and its exposure intelligence infrastructure have been operating independently of the NIST enrichment pipeline for years. That independence, which was a competitive differentiator a month ago, is now an operational necessity for defenders.
Here’s what that looks like in practice.
Prioritization that doesn’t depend on NVD. Tenable’s exposure intelligence is powered by machine learning trained on more than 1.7 trillion real-world security findings — the largest proprietary vulnerability intelligence data lake in the industry, growing by 113 billion new findings every month across nearly 50 million assets scanned daily.
Tenable’s Vulnerability Priority Rating (VPR) doesn’t need a CVSS score from NIST to tell you which vulnerabilities matter. It uses real-time exploitation telemetry and ground-truth data from actual environments. When the NVD can no longer tell you how severe a CVE is, VPR is unaffected — because it was never dependent on that pipeline in the first place.
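As a toy illustration only (this is not Tenable’s VPR model, whose inputs and weights are proprietary), here is the general idea of a priority score built purely on exploitation evidence; every weight below is invented:

```python
def evidence_score(exploited_in_wild: bool,
                   exploit_code_public: bool,
                   actor_associations: int,
                   days_since_last_sighting: int) -> float:
    """Toy priority score from exploitation evidence alone.
    Invented weights for illustration; no CVSS or NVD input required."""
    score = 5.0 if exploited_in_wild else 0.0
    score += 2.0 if exploit_code_public else 0.0
    score += min(actor_associations, 3)                       # cap the actor signal
    score += 2.0 if days_since_last_sighting <= 30 else 0.0   # recency bonus
    return min(score, 10.0)

# A CVE with zero NVD enrichment can still score 10.0 here
# if telemetry says it is being used right now.
print(evidence_score(True, True, 4, 7))  # -> 10.0
```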
Exploitation intelligence that exceeds the government baseline. As of April 27, 2026, Tenable’s RSO team has tagged 1,924 CVEs as actively exploited in the wild. That’s more than CISA’s KEV catalog, which contained 1,568 entries on the same day. That same KEV is the one NIST just designated as one of its three remaining enrichment priorities.
Of the CVEs that Tenable actively tracks, 355 have no KEV entry at all. For organizations that depend on CISA KEV as their signal for when to act, those 355 CVEs represent threats with no government early-warning signal. Tenable is the signal.
An early-warning system that consistently leads government mandates. In 64% of cases where Tenable’s research team and CISA KEV overlap, Tenable established its watch designation first — with a median lead of seven days and an average lead of 37 days. In the most extreme case, Tenable flagged an actively exploited Citrix vulnerability 286 days before CISA added it to the KEV. That’s nine months of additional remediation runway for organizations acting on Tenable’s intelligence rather than waiting for a government mandate.
Threat actor attribution no scanning tool provides. As of April 27, 2026, Tenable’s research corpus includes 668 threat actor profiles, including 562 advanced persistent threat (APT) groups and 122 ransomware groups, with more than 2,600 documented associations linking specific CVEs to specific threat actors.
For example, when a SharePoint zero-day surfaced in July 2025, Tenable’s intelligence didn’t just tell you “this is critical.” It told you that Storm-2603 was deploying Warlock ransomware through it, that Kimsuky was conducting secondary exploitation, and which indicators of compromise to hunt for in your environment. That’s the difference between a severity score and finished intelligence.
A signal filter instead of an alert firehose. In a world of 59,000-plus CVEs per year, the last thing a security team needs is another feed that adds volume. Tenable’s research analysts have curated more than 11,000 CVEs and applied formal watches to only 368 of them — a 3.3% selectivity ratio.
That means 96.7% of the CVEs that Tenable’s team reviewed were assessed, classified, and determined not to warrant a formal watch. The discipline isn’t just in what Tenable escalates. It’s in what Tenable confidently tells you not to worry about. That’s the signal-to-noise ratio that scales to 59,000 — or 100,000 — CVEs a year.
Scanning the AI attack surface itself. As AI-driven tools create new categories of exposure, Tenable has deployed 274 AI discovery plugins specifically designed to scan for and identify AI and large language model (LLM)-related vulnerabilities, misconfigurations, and security risks. In the last 30 days alone, those plugins returned more than 457 million findings across more than 7,300 customers in 57 countries. As AI accelerates vulnerability discovery, it’s also creating its own exposure surface. Tenable is already scanning for it.
The case for moving now
The convergence of these three forces — AI-accelerated discovery, NIST’s operational retreat, and the sustained volume increase — creates a moment where the cost of inaction compounds faster than it ever has.
Every month that your prioritization depends on a public baseline that is no longer reliably enriching most CVEs, you accumulate exposures you can’t see. Every quarter that your triage process depends on human analysts manually evaluating a pipeline that’s growing by double digits, you fall further behind. And every year that your exposure management program relies on methods designed for a world of 20,000 CVEs annually, you’re playing a game whose rules have fundamentally changed.
The value proposition has shifted. It’s no longer, “Tenable provides better intelligence than what you can get for free.” It’s increasingly that the free infrastructure can no longer reliably deliver the intelligence you need at all. The 29,000 CVEs that NIST moved to “Not Scheduled” on April 15 aren’t coming back. The enrichment gap is widening by design.
Far from being hopeless, the situation is clarifying: Don’t rely on luck. Rely on exposure intelligence.
AI-driven vulnerability discovery is going to make software safer over time. More flaws found means more flaws fixed. That’s a genuine positive for the ecosystem. Tenable’s own research team has already integrated AI-augmented analysis workflows — processing more than 126,000 LLM-curated threat intelligence articles alongside traditional analyst-led curation — to maintain the quality and speed of our intelligence production as the pipeline accelerates.
But here’s what acceleration demands:
- An intelligence layer that can separate the 3% that requires immediate action from the 97% that doesn’t.
- A prioritization engine built on real exploitation evidence from real environments, not on severity scores from a pipeline that can no longer keep pace.
- Threat actor context that, in addition to telling you that a vulnerability is exploitable, also tells you who is exploiting it.
- Continuous visibility across your actual attack surface. AI-speed disclosure cycles will outrun a static inventory in days.
The organizations that will weather this well are the ones that invest now in intelligence-driven exposure management. The window for hoping your current approach will hold is closing. The volume is coming. The public infrastructure is contracting. The adversaries aren’t waiting for you to modernize your triage process.
The intelligence, the platform, and the evidence base exist right now to stay ahead of this problem — if you make the decision to use them.
Learn more about how Tenable One Exposure Management Platform helps organizations prioritize what matters in a world of accelerating vulnerability discovery.