Your Detections Stopped Working Months Ago. You Just Don't Know It Yet.
Here's a number that should keep every security leader up at night: 13% of SIEM rules in production environments are completely broken and will never fire an alert.
That's from CardinalOps' 2025 State of SIEM Detection Risk report, analyzing 13,000+ detection rules across hundreds of enterprise environments. These aren't legacy shops—these are modern security programs with significant investments in detection capabilities.
The Coverage Illusion
The same research found enterprise SIEMs cover only 21% of MITRE ATT&CK techniques. When narrowed to the top 10 techniques observed in real attacks, organizations cover just 4 of them.
| What You Think You Have | What You Actually Have |
|---|---|
| Comprehensive ATT&CK coverage | 21% technique coverage |
| Working detection rules | 13% completely broken |
| Real-time threat visibility | 1 in 7 attacks detected |
Organizations are sitting on mountains of telemetry—averaging 259 log types from 24,000 sources—that could detect 90% of ATT&CK techniques. But manual, error-prone detection engineering prevents them from using it.
Why Detections Die
Detection rules don't fail because attackers discovered some brilliant technique. They fail because environments change and rules don't.
The Picus Blue Report 2025, based on 160 million attack simulations, found 50% of detection failures stem from log collection issues. Not sophisticated evasion—just logs that stopped flowing because someone changed a firewall or migrated to a new endpoint agent.
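A silent log-collection failure is cheap to catch with a freshness check: if a source hasn't delivered an event within some window, the detections depending on it are effectively dead. A minimal sketch in Python, with hypothetical source names and a hypothetical one-hour threshold:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical last-ingest timestamps per log source, as a SIEM
# health API might report them.
last_seen = {
    "firewall-core": datetime.now(timezone.utc) - timedelta(minutes=3),
    "edr-agents": datetime.now(timezone.utc) - timedelta(days=12),  # stopped flowing
    "dns-resolvers": datetime.now(timezone.utc) - timedelta(minutes=1),
}

def stale_sources(last_seen, max_age=timedelta(hours=1)):
    """Return sources whose most recent event is older than max_age."""
    now = datetime.now(timezone.utc)
    return sorted(src for src, ts in last_seen.items() if now - ts > max_age)

print(stale_sources(last_seen))  # → ['edr-agents']
```

Run on a schedule, a check like this turns "logs quietly stopped flowing" into an alert instead of a months-long blind spot.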
What actually kills detections:
- Infrastructure changes break field mappings. Swap out a firewall vendor and your `src_ip` becomes `source_ip_address`. The rule keeps running. Nothing fires.
- Reference data goes stale. "Critical assets" and "privileged users" lists don't get updated. New domain controllers and new exec accounts are all invisible to your detections.
- Dependencies break silently. Correlation rules that chain detections fail when one link breaks. Enrichment sources stop updating. Downstream rules keep running but mean nothing.
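The field-mapping failure above can be surfaced the same way as stale logs: periodically sample recent events and verify that every field a rule references still appears in the data. A minimal sketch (the field names and event shapes are illustrative assumptions, not any specific SIEM's schema):

```python
def missing_fields(rule_fields, sample_events):
    """Report rule-referenced fields absent from every sampled event.

    A field present in zero recent events usually means a schema change
    (e.g. a new firewall vendor renamed src_ip to source_ip_address)
    has silently broken the rule.
    """
    seen = set()
    for event in sample_events:
        seen.update(event.keys())
    return sorted(set(rule_fields) - seen)

# Rule written against the old schema; events now use the new one.
rule_fields = ["src_ip", "dest_port"]
recent_events = [
    {"source_ip_address": "10.0.0.5", "dest_port": 443},
    {"source_ip_address": "10.0.0.9", "dest_port": 22},
]
print(missing_fields(rule_fields, recent_events))  # → ['src_ip']
```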
The Human Cost
This isn't just technical. The SANS 2025 Survey found "very frequent" false positives jumped from 13% to 20% year-over-year. When analysts spend days chasing dead ends, they burn out: 70% of SOC analysts with five or fewer years of experience leave within three years.
The Devo 2024 SOC Performance Report found 53% of alerts are false positives, and 70% of SOCs struggle with volume. One industrial control system study found a 99.7% false positive rate—27,000 alerts, 76 legitimate.
What Actually Works
Organizations getting this right treat detection engineering like software development:
- Version control everything. Rules in Git. Track changes. Enable rollbacks.
- Test continuously. Breach simulation tools validate whether rules actually fire.
- Build feedback loops. Every incident informs detection improvements.
- Measure rule health, not rule count. 200 working rules beats 1,000 broken ones.
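Treating rules as code means every rule ships with a test that replays a known-bad sample and asserts the rule fires, so a schema or logic break fails CI instead of failing silently in production. A minimal sketch, using a toy rule format and matcher rather than any real SIEM's API:

```python
def rule_matches(rule, event):
    """True when every field condition in the rule matches the event."""
    return all(event.get(field) == value for field, value in rule["where"].items())

# A toy detection rule and events replayed from a past incident.
rule = {
    "name": "suspicious_psexec",
    "where": {"process_name": "psexec.exe", "logon_type": 3},
}
true_positive = {"process_name": "psexec.exe", "logon_type": 3, "host": "dc01"}
benign = {"process_name": "explorer.exe", "logon_type": 2, "host": "ws42"}

assert rule_matches(rule, true_positive), "rule failed to fire on known-bad event"
assert not rule_matches(rule, benign), "rule fired on benign event"
print("rule health check passed")
```

In practice the matcher would be your SIEM's query engine or a breach-and-attack-simulation tool; the point is the shape of the loop: rule in Git, sample events alongside it, assertion in CI.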
The Bottom Line
Most detections don't fail because attackers got smarter. They fail because the system stopped evolving.
CardinalOps' five years of data is clear: organizations that don't adopt continuous assessment of detection health remain dangerously exposed—even with modern platforms and sophisticated telemetry.
Detection engineering isn't something you finish. It's something you maintain. Because somewhere in your SIEM right now, there's a rule that's supposed to catch the thing that will compromise you.
It probably stopped working six months ago.
Sources: CardinalOps 2025 SIEM Report, Picus Blue Report 2025, SANS 2025 Survey, Devo 2024 SOC Report