A physician reviews a patient’s chart during an examination, unaware that a medical transcription agent is simultaneously updating electronic health records, suggesting prescriptions, and retrieving prior test results in real time. Meanwhile, on a factory floor, a computer vision agent scans components at speeds no human inspector could match. Both scenarios highlight a critical challenge: these AI agents operate as non-human entities, yet most enterprise identity and access management (IAM) systems remain designed for human users alone.
This structural mismatch is why 80% of AI agent projects remain stuck in pilot phases, according to Cisco President Jeetu Patel. While 85% of enterprises are running agent pilots, only 5% have successfully deployed them in production—a gap that underscores a fundamental trust issue. For security leaders, the core questions are immediate and urgent: Which agents have access to sensitive systems? And who bears responsibility when one of those agents acts outside its intended scope?
The identity crisis triggered by autonomous agents
The challenge isn’t merely technical—it’s architectural. A 2026 report from IANS Research reveals that most businesses still lack robust role-based access controls (RBAC) even for human identities, let alone machine-driven entities. The problem compounds further: 44% of attacks tracked in the IBM X-Force Threat Intelligence Index stemmed from vulnerabilities in public-facing applications, often due to missing authentication controls and AI-assisted exploitation methods.
Michael Dickman, SVP and GM of Cisco’s Campus Networking business, argues that the current IAM crisis isn’t a tooling issue but a foundational one. Drawing on experience leading product teams at Gigamon and Aruba Networks, Dickman emphasizes that traditional telemetry sources—security logs, application performance monitors, and endpoint detection tools—often miss critical data. "The network sees what others can’t," he explains. "It captures actual system-to-system communications, not assumptions about which systems should interact. That raw behavioral data is essential for enforcing policies at the speed agents operate."
Trust must come first, not as an afterthought
Dickman challenges the conventional wisdom that productivity should precede security. "Trust isn’t a secondary consideration," he states. "It’s a prerequisite. When agents autonomously update patient records, modify network configurations, or process financial transactions, the consequences of a compromised identity expand exponentially."
He outlines four foundational conditions for building trust in agentic AI:
- Secure delegation: Clearly define agent permissions and maintain a verifiable chain of accountability. Each agent’s scope must be explicitly documented, and actions should trace back to a human owner.
- Cultural readiness: Traditional alert fatigue strategies—like aggregating alerts to reduce noise—fail when agents can evaluate every alert autonomously. This shift requires rethinking workflows and team structures to integrate AI-driven decision-making.
- Token economics: Every agent action incurs computational costs. Hybrid architectures, where agents handle reasoning and deterministic systems execute actions, can balance efficiency with control.
- Human judgment: AI excels at data processing but struggles with nuance. Dickman’s team used an AI tool to draft a product requirements document, which produced 60 pages of repetitive filler—demonstrating the irreplaceable role of human oversight in refining outputs.
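The first condition—secure delegation—can be made concrete with a short sketch. The names below (`AgentIdentity`, `authorize`, the example agent and actions) are hypothetical illustrations, not any particular IAM product's API; the point is simply that an agent's scope is declared explicitly and every decision is logged against a human owner.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """A non-human identity with an explicitly documented scope and a human owner."""
    agent_id: str
    owner: str                      # the accountable human
    allowed_actions: frozenset      # the agent's declared scope

audit_log = []  # every decision traces back to (agent, owner, action, outcome)

def authorize(agent: AgentIdentity, action: str) -> bool:
    """Permit the action only if it falls inside the agent's declared scope,
    and record the decision so accountability reaches the owner."""
    permitted = action in agent.allowed_actions
    audit_log.append((agent.agent_id, agent.owner, action, permitted))
    return permitted

# Hypothetical transcription agent scoped to chart reads and record updates.
scribe = AgentIdentity("transcribe-01", "dr.smith",
                       frozenset({"read_chart", "update_record"}))
assert authorize(scribe, "update_record")          # in scope
assert not authorize(scribe, "issue_prescription") # outside scope, still logged
```

A production system would back this with signed credentials and short-lived tokens rather than an in-memory list, but the shape—explicit scope plus a verifiable chain back to a person—is the same.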
Why network-level visibility is non-negotiable
Enterprise data is often siloed across observability platforms, security stacks, and application tools, leaving security teams with fragmented insights. Dickman warns that this fragmentation becomes perilous as AI agents proliferate in physical environments—from retail analytics to factory-floor inspections.
"The network provides the only view that captures actual communications," he says. "Not hypothetical connections, but real interactions. That’s the data needed to enforce agent policies at machine speed."
As IoT devices and physical AI systems generate increasingly sensitive data—tracking shopper behavior or monitoring production lines—the demand for precise access controls intensifies. Without trustworthy identity governance, the potential for misuse or breach grows, turning what should be efficiency gains into liability risks.
The path forward requires rearchitecting IAM systems to account for non-human identities. Enterprises that delay this transition risk not only stalled AI deployments but also expanded attack surfaces. The question isn’t whether agents will transform operations—it’s whether security frameworks will evolve fast enough to keep pace.
AI summary
AI agents are managing hospital records and factory inspections, but enterprises lack the infrastructure to secure them. Trust comes before business productivity and is a fundamental requirement.
