Role of AI in Incident Response: Hype vs. Value (Part 2)

CISO Global AI Division

In the first part on this topic, we looked at where AI genuinely helps in incident response and where human judgment remains essential. But we left some important ground uncovered: specifically, the edges of AI’s usefulness and the blind spots that can turn into real problems if ignored.

Let’s start with what AI can do that often gets overlooked.

Real-time monitoring and alerting is one of its strengths. AI-driven systems can ingest data streams from across your environment and flag activity the moment it deviates from established baselines. That can mean spotting a surge in failed logins, an unexpected outbound connection, or unusual file access patterns, often within seconds.
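
To make the baseline idea concrete, here is a minimal sketch, not any vendor’s actual detection logic, of flagging a failed-login surge against a simple statistical baseline. The counts and the threshold value are illustrative assumptions.

```python
# Minimal sketch of baseline-deviation alerting, assuming you already collect
# per-minute failed-login counts. Names and numbers here are illustrative.
from statistics import mean, stdev

def exceeds_baseline(history: list[int], current: int, threshold: float = 3.0) -> bool:
    """Flag the current count if it sits more than `threshold` standard
    deviations above the historical baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline yet
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return current > baseline
    return (current - baseline) / spread > threshold

# Example: a quiet baseline followed by a sudden surge in failed logins.
recent_minutes = [3, 5, 4, 6, 2, 5, 4, 3]
print(exceeds_baseline(recent_minutes, 42))  # True -> raise an alert
```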

AI also shines when it’s fed custom threat intelligence. If your organization has invested in building its own threat intelligence, such as threat trends, indicators of compromise (IOCs), known attack paths, or mappings of stealthy attacker behaviors, AI can be tuned to match against them at speed and scale. The result is faster, more targeted detection that reflects your specific threat landscape.
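
As a rough illustration of what matching at speed and scale looks like in its simplest form, here is a hypothetical sketch of checking events against an organization’s own IOC sets. The field names and indicator values are made up for the example.

```python
# Minimal sketch of matching events against organization-specific IOCs.
# The feed contents and event field names are assumptions for illustration.
SUSPICIOUS_IPS = {"203.0.113.45", "198.51.100.7"}         # from your own intel feed
SUSPICIOUS_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # known-bad file hashes

def match_iocs(event: dict) -> list[str]:
    """Return the IOC types this event matches, if any."""
    hits = []
    if event.get("dst_ip") in SUSPICIOUS_IPS:
        hits.append("suspicious-ip")
    if event.get("file_hash") in SUSPICIOUS_HASHES:
        hits.append("known-bad-hash")
    return hits

event = {"dst_ip": "203.0.113.45", "file_hash": None, "user": "svc-payroll"}
print(match_iocs(event))  # ['suspicious-ip']
```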

Some organizations are even experimenting with partial incident containment. For example, an AI-driven SOAR playbook might automatically disable a compromised account or block a suspicious IP before a human even sees the alert. These steps can buy precious minutes, though they work best for clearly defined, low-risk containment actions.
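To give a sense of what such a playbook step might look like, here is a hedged sketch of an auto-containment decision limited to pre-approved, low-risk actions. The disable_account and block_ip helpers are placeholders for whatever your identity provider and firewall APIs actually expose; this is not a real SOAR product’s interface.

```python
# Sketch of an automated containment step in the spirit of a SOAR playbook.
# Only clearly defined, low-risk, reversible actions run automatically;
# everything else is escalated to a human analyst.
LOW_RISK_ACTIONS = {"disable_account", "block_ip"}

def disable_account(user: str) -> None:
    print(f"[containment] disabled account {user}")  # placeholder for an IdP API call

def block_ip(ip: str) -> None:
    print(f"[containment] blocked traffic for {ip}")  # placeholder for a firewall API call

def auto_contain(alert: dict) -> bool:
    """Run containment only for pre-approved actions; return False to escalate."""
    action = alert.get("recommended_action")
    if action not in LOW_RISK_ACTIONS:
        return False  # needs human judgment
    if action == "disable_account":
        disable_account(alert["user"])
    elif action == "block_ip":
        block_ip(alert["source_ip"])
    return True

alert = {"recommended_action": "block_ip", "source_ip": "198.51.100.7"}
print(auto_contain(alert))  # True: contained before an analyst sees the alert
```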

And while most current deployments focus on reacting to events, predictive analysis is emerging. By analyzing historical patterns and correlating them with ongoing telemetry, AI can sometimes highlight conditions that precede an incident, giving your team a chance to act before damage occurs.
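One very simple way to think about precursor detection is sequence matching over recent telemetry: if events that have historically preceded incidents show up in order, raise an early warning. The sketch below uses made-up event names and is only meant to illustrate the idea, not a production correlation engine.

```python
# Sketch of precursor detection: warn when a historically incident-preceding
# sequence of events appears (in order) in recent telemetry. Event names are
# illustrative assumptions.
PRECURSOR_SEQUENCE = ["password_spray", "new_admin_account", "gpo_change"]

def contains_sequence(events: list[str], pattern: list[str]) -> bool:
    """True if the pattern's events appear in order (not necessarily adjacent)."""
    it = iter(events)
    return all(step in it for step in pattern)

recent = ["login_ok", "password_spray", "vpn_login", "new_admin_account", "gpo_change"]
print(contains_sequence(recent, PRECURSOR_SEQUENCE))  # True -> act before damage occurs
```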

That’s the good news. Now let’s revisit the limitations, because they matter just as much.

First, novel threats remain a problem. Most AI detection models are built on historical data. If the attack doesn’t match something in that data, whether it’s a brand-new exploit or an unconventional sequence of events, there’s a real risk it goes unnoticed.

Similarly, out-of-the-box attacks that don’t follow established playbooks can confuse the model. Creative, one-off intrusion methods can slip past because the AI has no reference point to compare against.

Then there are adversarial attacks, a more deliberate weakness. These are techniques specifically designed to trick AI models. In computer vision, it might be a few pieces of tape on a road sign fooling an autonomous car. In cybersecurity, it can be obfuscating malware in a way that looks normal to the model, or flooding logs with benign noise to hide malicious activity. The more attackers study how AI works, the more they’ll find ways to exploit its blind spots.

AI also struggles with cost vs. benefit trade-offs. It can flag a spike in activity, but it can’t tell you whether it’s worth taking a system offline in the middle of payroll processing. That decision requires human understanding of both operational impact and business risk.

And while we know AI lacks contextual understanding, it’s worth calling out how this shows up in practice. Without that context, AI can misclassify benign actions as malicious, or worse, treat dangerous activity as normal. Multistage attacks, especially those mixing technical and social engineering components, can slip past because AI doesn’t grasp why certain actions matter in sequence.

Speaking of social engineering, AI is not good at spotting attacks that target people directly. It can process phishing indicators, but it doesn’t understand trust, manipulation, or human behavior in a way that would let it reliably detect scams designed to exploit relationships.

The takeaway? AI brings speed, scale, and consistency to incident response, but it also has boundaries. If you know where those are, you can design your program so AI amplifies your team without replacing essential human oversight.

In Part 1, we said the smartest way to use AI is as a force multiplier. That’s still true. But in Part 2, we see that multiplying the wrong thing (noise, bias, or blind spots) can be just as dangerous as missing an alert entirely. The real skill is in pairing AI’s strengths with human judgment, operational context, and a healthy dose of skepticism.