New risks loom as agentic AI positions itself to reshape cybersecurity
When the global artificial intelligence (AI) market in cybersecurity crossed $24 billion in 2024, analysts predicted growth would only accelerate, eventually topping $146 billion by 2034. The key driver of that surge is agentic AI, the name given to AI-powered systems that can act with greater independence than traditional machine learning models.
A study published this year in Telecommunications Policy highlights the promise of these systems, as well as the new risks arriving with them. Researchers argue that agentic AI can ease the burden on overextended Security Operations Centers by detecting threats, analyzing behavior patterns, and even taking corrective actions without human approval. This autonomy makes these systems attractive, but it also expands their potential attack surface, creating fresh vulnerabilities.
Promoting assistants to directors of cybersecurity
AI agents already handle routine tasks like blocking spam and managing calendar reminders. What distinguishes the new wave of “agentic” models is their ability to combine long-term memory, planning, and decision-making, allowing them to learn continuously and adapt to different environments.
Unlike earlier generations, agentic AI can triage alerts, quarantine suspicious devices, and patch vulnerabilities almost instantly. Security companies are already embedding such features. CrowdStrike’s Charlotte AI, for example, promises more than 98 percent accuracy in detection triage, saving analysts more than 40 hours of manual work each week. Darktrace’s Enterprise Immune System continuously learns what “normal” traffic looks like inside a network, autonomously blocking unusual activity without waiting for human review.
ReliaQuest, another vendor, claims its GreyMatter platform can reduce containment times to under five minutes. In theory, this could level the playing field against attackers who have managed to decrease their average “breakout time” from initial access to lateral movement to well under an hour, according to vendor sources.
A human shortage gives agentic AI the chance to shine
The rise of agentic AI comes at a time of severe labor shortages in cybersecurity, with a global gap of nearly 4 million unfilled positions, according to the World Economic Forum. In the United States, the Bureau of Labor Statistics reported a median annual wage of $124,910 for information security analysts in 2024, underscoring how costly and competitive talent has become.
Some companies are experimenting with “digital employees” to supplement staff. Twine, a Tel Aviv-based cybersecurity startup, has developed an AI agent named Alex, which specializes in identity and access management. Acting like a junior staffer, Alex identifies vulnerabilities and blocks unauthorized access, while offering full transparency for supervisors.
Other firms anticipate a proliferation of “virtual CISO” services, giving smaller organizations access to executive-level expertise on a flexible basis. That trend could help democratize cybersecurity, which has traditionally been resource-intensive and concentrated in large enterprises.
More autonomy, more adaptability, and more points of failure
The same qualities that make agentic AI attractive to defenders (autonomy, adaptability, and interconnectivity) also create new dangers. Traditional AI models operate within bounded inputs and outputs. Agentic systems, by contrast, interact with external tools, APIs, and databases, multiplying possible points of failure.
The U.S. National Institute of Standards and Technology (NIST) has long warned that cybersecurity should be approached as a process of risk management rather than absolute defense. Its Risk Management Framework emphasizes identifying threats and vulnerabilities as a continuous cycle. For agentic AI, that cycle becomes more urgent. A flaw in one agent can ripple across interconnected systems, magnifying consequences.
One particularly acute risk stems from employees introducing unauthorized tools into corporate environments. Without oversight, sensitive data can be exposed, or insecure code injected into production systems. Researchers caution that adversaries may exploit these gaps with data poisoning or prompt injection attacks, manipulating agents into approving fraudulent actions.
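To make the prompt-injection risk concrete, the sketch below shows one simple guardrail a defender might place in front of an agent: treat instructions parsed from external content as untrusted and route any high-impact action back to a human. This is an illustrative example only; the function, action names, and source categories are hypothetical and are not drawn from the study or any vendor's product.

```python
# Hypothetical sketch: a minimal guard that keeps an agent from acting on
# instructions embedded in untrusted content (a simplified prompt-injection
# control). All names are illustrative, not from any real product.

HIGH_RISK_ACTIONS = {"approve_payment", "grant_access", "disable_alerting"}

def review_agent_action(action: str, source: str, human_approved: bool = False) -> bool:
    """Return True only if the requested action is safe to execute automatically."""
    # Anything parsed from external content (email bodies, web pages, ticket
    # text) is treated as untrusted: it may contain injected instructions.
    untrusted = source in {"email_body", "web_page", "ticket_text"}

    if action in HIGH_RISK_ACTIONS and (untrusted or not human_approved):
        # High-impact actions triggered by untrusted input fall back to a human.
        return False
    return True

# An agent told to approve a payment by text found inside an email is blocked;
# a routine action initiated from a trusted console is allowed.
assert review_agent_action("approve_payment", source="email_body") is False
assert review_agent_action("quarantine_device", source="edr_console") is True
```

The point of the pattern is that approval authority never flows directly from untrusted text to an irreversible action; a person stays in the loop for the decisions that matter most.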
Cyber criminals are testing the limits of their newest recruit
Cybercriminals have already begun to probe the offensive potential of agentic AI. Self-improving phishing campaigns, automated vulnerability scans, and synthetic identity fraud have been identified as emerging tactics. A 2022 case documented by U.S. Immigration and Customs Enforcement showed how fraudsters blended fake and real data to open illicit accounts; researchers warn that agentic AI could help attackers scale up these schemes dramatically.
For now, though, attackers face limits. Many autonomous agents remain unreliable at carrying out multi-stage operations without supervision. But history suggests that once tools stabilize, adoption can be swift. Prior studies have shown that cybercrime organizations often mirror legitimate corporations in their ability to integrate new technologies and optimize business models.
Keeping an eye on the watchmen
Technical safeguards are only part of the solution. Governance frameworks are equally necessary. Multi-agent systems, in which networks of AI agents collaborate, could make oversight even more difficult. Without clear rules of engagement, organizations risk cascading failures if a single component is compromised.
Experts recommend restricting autonomy based on risk levels and ensuring audit trails for all agent actions. NIST and other standards bodies emphasize the principle of least privilege (PoLP), which limits access to only what is required, as especially critical when delegating functions to AI.
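A rough sketch of how those two controls, least-privilege scoping and a per-action audit trail, might fit together is shown below. The agent names, permission strings, and logging format are hypothetical, and a production system would write to tamper-evident storage rather than standard output.

```python
# Hypothetical sketch of least-privilege scoping plus an audit trail for
# every agent action. All identifiers are illustrative only.

import json
import time

# Each agent is granted only the permissions its role requires.
AGENT_SCOPES = {
    "triage_agent": {"read_alerts", "annotate_alerts"},
    "containment_agent": {"read_alerts", "quarantine_device"},
}

def perform(agent: str, permission: str, target: str) -> bool:
    """Check the agent's scope and record the attempt, allowed or not."""
    allowed = permission in AGENT_SCOPES.get(agent, set())
    # Append-only audit record for every attempted action.
    print(json.dumps({
        "ts": time.time(), "agent": agent, "permission": permission,
        "target": target, "allowed": allowed,
    }))
    return allowed

# The triage agent was never granted quarantine rights, so the attempt is
# denied but still logged; the containment agent's identical request succeeds.
perform("triage_agent", "quarantine_device", "laptop-042")
perform("containment_agent", "quarantine_device", "laptop-042")
```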
Privacy laws add another layer of complexity. Agentic AI thrives on data, but regulations such as the European Union’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) set strict boundaries on how personal information can be processed. To reconcile these demands, companies are experimenting with privacy-enhancing technologies that allow AI to analyze sensitive data without exposing it directly.
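As a minimal illustration of that idea, the sketch below pseudonymizes direct identifiers before a record reaches an agent, so analysis runs on stable tokens rather than raw personal data. It is a simplified stand-in for the broader class of privacy-enhancing technologies, which also includes techniques such as differential privacy and secure enclaves; the field names and salt handling here are hypothetical.

```python
# Hypothetical sketch: replace direct identifiers with salted hashes before
# handing a record to an AI agent. Field names are illustrative only; in
# practice the salt would be managed and rotated by a secrets service.

import hashlib

def pseudonymize(record: dict, fields=("email", "username"), salt="rotate-me") -> dict:
    """Return a copy of the record with listed identifier fields tokenized."""
    out = dict(record)
    for field in fields:
        if field in out:
            digest = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
            out[field] = digest[:16]  # stable token, not reversible without the salt
    return out

event = {"email": "analyst@example.com", "username": "jdoe", "action": "login_failed"}
print(pseudonymize(event))  # the agent sees tokens plus the behavioral fields
```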
Slow and steady is going to win this race, too
The study’s authors conclude that agentic AI represents both a transformative opportunity and a significant new vector of risk. Organizations that succeed will be those that embed oversight from the beginning by building governance, compliance, and monitoring into every deployment.
In practice, that may mean slowing down. Forrester has predicted that “three out of four firms that build aspirational agentic architectures on their own will fail,” often because firms underestimate the long-term costs of maintenance and oversight. Partnering with vendors, aligning with international standards, and training employees to recognize the limits of automation may prove as important as the algorithms themselves.
As the cybersecurity labor gap persists, pressure will grow to automate. But without careful planning, the tools designed to defend networks could just as easily become liabilities. The trajectory of agentic AI will depend not only on its technical sophistication but on whether organizations can ensure that autonomy serves security rather than undermining it.