From Copilot to agent – AI is growing up, and CISOs need to be ready

Over the past year, CISOs have wrestled with integrating AI into security processes, balancing the promised efficiencies with the need for stringent testing and adherence to security protocols. Hot on the heels of this integration effort comes another innovation: agentic AI. An evolution that marks the next leap in AI maturity, agentic AI offers both opportunities and challenges that could redefine how CISOs, CIOs, and their teams defend against cyber threats.

When I submitted a panel talk to RSA this past September on CISOs' perspectives on using AI in their organizations and finding AI's promised productivity gains, the term "agentic AI" was just coming into play. In a few short months, the technology has already advanced, with significant developments in AI autonomy. Today, agentic AI is becoming part of the strategic roadmap discussion for CIOs and CISOs.

From security copilots to agentic AI

A year ago, the buzz was all about security copilots: AI-powered assistants designed to enhance threat detection, streamline incident response, and optimize security workflows. These copilots brought automation into enterprise security processes and strategies, transforming how organizations respond to cyber risks.

Now, agentic AI has stepped into the spotlight. More autonomous and adaptive than its predecessors, this next-gen approach can take on more complex security tasks, anticipate emerging threats, and dynamically adjust defenses in real time. This class of advanced AI systems is designed to operate autonomously, making decisions and taking actions to reach specific goals with little to no human monitoring.
The big difference is that agentic AI uses advanced reasoning, adaptability, and learning capabilities to independently navigate complex tasks, rather than relying on human approval and guidance at each decision point the way existing AI does.

It's an astonishing step ahead, combining the power of large language models (LLMs) and real-time data processing to act as a proactive "agent" in dynamic environments without human intervention. But questions linger. Will the AI take over entire processes? And if so, could the lack of a human in the loop cause unexpected issues? For example, might an agentic AI stop or block a legitimate business transaction because the agent thinks it's fraud? Alternatively, could the agent accidentally create a vulnerability that can be exploited?

Understanding the potential of agentic AI

For CISOs, agentic AI represents both a transformative opportunity and a strategic shift. As cyber threats grow in speed and sophistication, CISOs are pressured to maintain or boost organizational resilience while managing resource constraints and worker burnout. That's where agentic AI can make its mark: stepping in as a force multiplier, automating decision-making, adapting to evolving threats, and enabling CISOs to evolve from reactive defenders into architects of business-aligned security strategies.

But it's not without challenges. CISOs must grapple with governance policies, along with reliability and compliance issues. How can humans oversee a process designed to run without their oversight? AI decision-making may also erode trust if not addressed with transparent frameworks and human oversight. CrowdStrike and NVIDIA recently announced a partnership to advance these areas.
The companies aim "to deliver even more intelligent automation, enable real-time decision-making, and enhance precision to help security teams respond to threats with unprecedented speed and accuracy." They hope that agentic AI security automation will enable faster and more accurate threat detection; more autonomous SOC operations with optimized response times and reduced operational costs; and more precise data for enterprise security teams about real threats.

Despite both incumbent security brands and a large number of newer entrants pushing the agentic AI concept, CISOs are moving forward at a cautious pace. The most cutting-edge and AI-savvy security executive I regularly speak with stated, "Currently, agentic AI is more hype than reality, but it has massive future potential." The testing that this executive and his team have done shows that agentic systems are still not outperforming traditional solutions that incorporate human input and direction in any meaningful way. Another security executive pointed to the promise of cutting down on the noise level in the SOC but had no near-term plans to deploy agentic solutions.

Others said they see nearer-term applicability to basic and moderately complex work in their organizations (including use cases outside their security domain), but found it unsuitable for high-value, high-complexity tasks. Combined with open regulatory questions, this sentiment suggests organizations are not moving to a fully automated defensive environment anytime soon. But as we've all seen, the AI market moves faster than prior waves of innovation.

Improving SOC operations

Given the comments above, let's delve a bit deeper into one of the nearer-term use cases. Agentic AI solutions that help automate detection and remediation in the security operations center (SOC) could certainly provide a welcome lift to an under-resourced department receiving thousands of daily threat alerts.
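To make the triage idea concrete, here is a minimal, purely illustrative sketch of an agentic alert-triage loop. Everything in it is hypothetical — the alert fields, the additive scoring, and the thresholds are invented for the example; a production agent would draw on far richer context and, per the caution above, keep a human in the loop for escalations.

```python
from dataclasses import dataclass

# Hypothetical alert record; real SOC alerts carry far richer context.
@dataclass
class Alert:
    source: str     # e.g. "edr", "firewall", "email-gateway"
    indicator: str  # IP, hash, or user involved
    severity: int   # 1 (low) .. 10 (critical)

def correlate(alerts):
    """Group alerts that share an indicator, mimicking cross-source correlation."""
    groups = {}
    for a in alerts:
        groups.setdefault(a.indicator, []).append(a)
    return groups

def triage(alerts, escalate_threshold=12):
    """Score each correlated group and decide: auto-close, report, or escalate."""
    decisions = {}
    for indicator, group in correlate(alerts).items():
        # Simple additive score; corroboration across sources raises priority.
        score = sum(a.severity for a in group) + 2 * (len({a.source for a in group}) - 1)
        if score >= escalate_threshold:
            decisions[indicator] = "escalate"    # hand off (or isolate) immediately
        elif score >= escalate_threshold // 2:
            decisions[indicator] = "report"      # prioritized, investigated summary
        else:
            decisions[indicator] = "auto-close"  # suppress probable noise
    return decisions

alerts = [
    Alert("edr", "10.0.0.5", 6),
    Alert("firewall", "10.0.0.5", 5),
    Alert("email-gateway", "203.0.113.9", 2),
]
print(triage(alerts))  # 10.0.0.5 corroborated across two sources -> "escalate"
```

The design point is the correlation step: an indicator corroborated by multiple independent sources is treated very differently from a single noisy alert, which is what lets the loop suppress noise without missing real incidents.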
Automated agentic AI solutions could assist SOC analysts in the following ways:

- Automating alerts: Instead of analysts sifting through thousands of alerts, agentic AI could evaluate each one, correlating data from multiple sources and delivering prioritized, fully investigated reports. This could cut work times from days to minutes.
- Threat detection: Agentic AI could analyze threat intelligence at scale, distinguishing real threats from false positives with contextual insights.
- Scaling efficiency: Agentic AI could automate log analysis and incident correlation, allowing SOCs to manage higher workloads without adding headcount. Gartner predicts a 40% efficiency boost by 2026 as analyst roles shift toward AI oversight and strategic initiatives.
- Faster response: Agentic AI could autonomously flag suspicious network behavior and isolate compromised devices in real time, helping to contain threats faster.

For CISOs, deploying agentic AI could result in a leaner, more effective SOC that counters automated attacks with equal speed.

Automating help desk operations

Another use case for agentic AI would be to evolve the enterprise help desk. Because the help desk is critical for both IT and security, it's often weighed down by repetitive tickets and user requests, which causes delays and strains resources. Agentic AI could improve this area by:

- Automating repetitive tasks: Agentic AI resolves routine issues like password resets and software troubleshooting without human intervention, reducing ticket volume and wait times and freeing help desk staff for other projects.
- Improving the user experience: Agentic AI delivers personalized 24/7 support, offering tailored solutions for common user needs.
- Flagging security alerts: Agentic AI automatically flags phishing reports or device anomalies and escalates them to the SOC, boosting collaboration between CIO and CISO teams on security issues.

Having an automated assist at the help desk could be a boon for weary CIOs and CISOs, with agentic AI handling minor issues before they grow into serious breaches. Despite all of these potential advantages, agentic AI is still in its deployment infancy. It is an area of high promise, but not without risks and concerns. It will be interesting to see where market adoption sits at this time next year. As I said at the beginning, a year ago we were just talking about copilots, and agentic AI was not yet part of the discussion.
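As a closing illustration of the help-desk triage described above, here is a minimal, hypothetical routing sketch. The categories, keyword rules, and handler names are invented for the example; a real agent would classify tickets with an LLM and act through the organization's ITSM and identity APIs rather than string matching.

```python
# Hypothetical help-desk triage: keyword rules stand in for an LLM classifier.
AUTO_RESOLVABLE = {
    "password reset": "reset_password",
    "install": "software_troubleshoot",
}
SECURITY_SIGNALS = ("phishing", "suspicious", "malware")

def route_ticket(text: str) -> str:
    """Return which queue a ticket lands in: SOC escalation, auto-resolve, or human."""
    lowered = text.lower()
    if any(signal in lowered for signal in SECURITY_SIGNALS):
        return "escalate-to-soc"      # flag for the security team first
    for phrase, handler in AUTO_RESOLVABLE.items():
        if phrase in lowered:
            return f"auto:{handler}"  # resolved without human intervention
    return "human-queue"              # everything else waits for staff

print(route_ticket("I clicked a suspicious link in an email"))  # escalate-to-soc
print(route_ticket("Need a password reset for my laptop"))      # auto:reset_password
```

Note the ordering: security signals are checked before auto-resolution, so a phishing report never gets silently "fixed" — it reaches the SOC, which is the CIO/CISO collaboration point the use case depends on.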