Agentic AI is rapidly expanding into security operations (SecOps), taking on repetitive, lower-value tasks such as alert triage, phishing analysis, and malware reverse-engineering. Rather than replacing analysts, these systems aim to ease workloads, reduce alert fatigue, and accelerate investigations.
Microsoft’s Project Ire is a notable example. The agent autonomously reverse-engineers suspicious software and recently authored the first AI-generated “conviction” strong enough for Windows Defender to automatically block an advanced persistent threat (APT) malware sample. In testing, Ire achieved 0.98 precision and 0.83 recall on a Windows driver dataset, and 89% precision on Defender telemetry, though recall there dropped to 25%. That high-precision, lower-recall profile suits triage, where avoiding false positives (and the analyst hours they consume) matters more than catching every sample. Microsoft also previewed a Phishing Triage Agent that processes user-reported emails and generates natural-language rationales for security teams.
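For context on those figures, precision measures how often a flagged file is truly malicious, while recall measures how many real threats get flagged at all. A minimal sketch with illustrative counts (not Microsoft’s actual evaluation data) shows how an 89%-precision, 25%-recall profile arises:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical run: 100 truly malicious files, 25 correctly flagged (TP),
# 75 missed (FN), and 3 benign files incorrectly flagged (FP).
p, r = precision_recall(tp=25, fp=3, fn=75)
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.89, recall=0.25
```

In a profile like this, almost every escalation is worth an analyst’s time, but most threats still slip past the agent, which is why it complements rather than replaces existing detection layers.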
Other vendors are following suit:
- CrowdStrike’s Charlotte AI integrates into the Falcon platform to provide automated triage with contextual explanations.
- ReliaQuest’s GreyMatter leverages agentic AI for detection, investigation, and response across multiple tools.
- Google’s Big Sleep agent uncovered a major SQLite vulnerability (CVE-2025-6965), while its Sec-Gemini model improves forensic workflows.
A key design feature across these systems is transparency. Instead of issuing binary verdicts, agentic AI produces evidence chains—structured reports, summaries, and rationales that analysts can review. This supports oversight while ensuring human experts retain responsibility for high-risk decisions.
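The vendors above do not publish a common schema for these evidence chains, but as a rough sketch (all field and class names here are illustrative assumptions, not any vendor’s API), such a record might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceStep:
    """One step of the agent's investigation, e.g. a decompiled function or a sandbox observation."""
    tool: str          # which tool produced the evidence, e.g. "decompiler", "sandbox"
    observation: str   # what was observed
    rationale: str     # why the agent considers it relevant to the verdict

@dataclass
class EvidenceChain:
    sample_id: str
    verdict: str                              # e.g. "malicious", "benign", "inconclusive"
    confidence: float                         # 0.0 to 1.0
    steps: list[EvidenceStep] = field(default_factory=list)
    analyst_reviewed: bool = False            # a human reviewer retains final responsibility

    def summary(self) -> str:
        """Render a human-readable report an analyst can skim before approving action."""
        lines = [f"{self.sample_id}: {self.verdict} ({self.confidence:.0%} confidence)"]
        lines += [f"  [{s.tool}] {s.observation} -> {s.rationale}" for s in self.steps]
        return "\n".join(lines)
```

The point of such a structure is that the verdict is never detached from the observations behind it, so an analyst can audit or override the agent before any high-risk action is taken.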
Adoption is accelerating. A July ISC² survey found 30% of security teams already use agentic AI, with another 42% evaluating integration. Reporting in Forbes and Axios suggests enterprises are prioritizing these tools to manage alert volume amid chronic analyst shortages.
However, risks remain. Analysts warn of model hallucinations, limited reasoning, and low recall in live pipelines. Without oversight, over-reliance on high-precision but low-recall systems can create blind spots, since threats the agent misses may never reach a human.
Overall, agentic AI is emerging as a standard augmentation layer in SecOps—scaling analysis, streamlining workflows, and boosting consistency, while keeping critical judgment with humans.