Anthropic is redefining how artificial intelligence agents evolve by introducing a capability called dreaming, a self-review mechanism that enables AI to learn from its own history. Unveiled at the Code with Claude developer conference in San Francisco, this innovation joins two other features—outcomes and multi-agent orchestration—that collectively address core challenges in deploying reliable AI agents for enterprise workloads.
AI agents gain the ability to self-correct and improve over time
The new dreaming feature operates as a scheduled background process that reviews an agent’s past sessions, memory stores, and workflow patterns to extract actionable insights. Unlike conventional memory systems that retain context within individual sessions, dreaming synthesizes information across multiple interactions to identify recurring mistakes, inefficient workflows, and shared preferences among teams of agents. This approach transforms isolated experiences into reusable knowledge without altering the underlying model’s weights.
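The mechanics described above can be illustrated with a minimal sketch. Anthropic has not published dreaming's implementation, so everything here is an assumption: the session-log structure, the error-counting heuristic, and the plain-text playbook format are hypothetical stand-ins for whatever the real system does.

```python
# Hypothetical sketch of a dreaming-style review pass. The log schema and
# playbook format are assumptions, not Anthropic's actual design.
from collections import Counter

def dream(session_logs: list[dict]) -> str:
    """Scan past sessions and distill recurring mistakes into a playbook."""
    errors = Counter()
    for session in session_logs:
        for event in session.get("events", []):
            if event.get("type") == "error":
                errors[event["message"]] += 1
    # Only mistakes that recur across the history become reusable knowledge.
    recurring = [msg for msg, n in errors.most_common() if n > 1]
    lines = ["# Playbook (auto-generated during dreaming)"]
    lines += [f"- Avoid: {msg}" for msg in recurring]
    return "\n".join(lines)
```

The key property the sketch preserves is the one the article emphasizes: the output is a plain-text note that future sessions can load into context, so the agent improves without any change to model weights.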
Alex Albert, Anthropic’s Head of Research Product Management, compared dreaming to how professionals document their problem-solving processes in organizations. "When someone works with an AI agent and iterates through a task, they often want to document the path from start to finish," Albert explained. "Dreaming automates this reflection by letting the model record its own learnings as plain-text notes or structured playbooks that future sessions can reference."
Enterprise users report dramatic efficiency gains
Early adopters are already seeing significant improvements in agent performance. Legal AI company Harvey achieved a sixfold increase in task completion rates after integrating dreaming into their workflows. Medical document review platform Wisedocs reduced review time by 50% using the outcomes feature, which formalizes success criteria for agent tasks. Meanwhile, Netflix is processing hundreds of build logs simultaneously through multi-agent orchestration, a system that coordinates specialized agents working in parallel.
These results align with Anthropic’s broader growth trajectory. During a fireside chat at the conference, CEO Dario Amodei revealed that the company’s revenue and usage have grown 80x year-over-year in Q1 2026—far surpassing the 10x annual growth the team had anticipated. API volume on the Claude platform has surged nearly 70x, and developers now spend an average of 20 hours per week interacting with Claude Code.
"We planned meticulously for a 10x growth scenario, but reality delivered 80x," Amodei said. "This explosive demand has strained our compute resources and highlighted the urgency for scalable agent infrastructure."
A live demonstration proves the concept in real time
The conference featured a live demonstration where Anthropic’s team configured a multi-agent system for a fictional aerospace startup called Lumara. The system consisted of three specialized agents: a commander overseeing mission success, a detector identifying optimal landing sites, and a navigator managing drone flight and safe landing procedures. A predefined rubric required soft landings, clear ground conditions, and sufficient fuel reserves for a return trip.
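A rubric like the one used in the Lumara demo can be expressed as a simple pass/fail check. The thresholds below (touchdown speed, fuel fraction) are invented for illustration; the article only states the three criteria, not their numeric values.

```python
# Illustrative encoding of the demo's landing rubric: soft landing, clear
# ground, and fuel reserves for a return trip. Thresholds are assumed values.
from dataclasses import dataclass

@dataclass
class LandingAttempt:
    descent_speed: float   # m/s at touchdown
    ground_clear: bool     # landing site free of obstacles
    fuel_remaining: float  # fraction of tank left after touchdown

def passes_rubric(attempt: LandingAttempt, return_fuel: float = 0.5) -> bool:
    """True only if all three predefined criteria are satisfied."""
    soft_landing = attempt.descent_speed <= 2.0  # assumed soft-landing limit
    return soft_landing and attempt.ground_clear and attempt.fuel_remaining >= return_fuel
```

Formalizing the rubric this way is what lets a dreaming pass score past runs consistently: each simulated landing either satisfies the criteria or contributes a failure pattern to the playbook.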
During the initial simulation across six hypothetical landing sites, the agents produced strong but imperfect results. The team then triggered a dreaming session from the Claude Developer Console. Overnight, the system reviewed all past simulations and generated a detailed descent playbook—a set of heuristics derived from patterns across multiple mission runs. When the simulation ran again the following morning with the new playbook in memory, the agents showed measurable improvements on previously underperforming sites.
Angela Jiang, Anthropic’s Head of Developer Experience, noted the minimal human intervention required: "All we had to do was press a button to activate dreaming, and the system autonomously refined its approach."
The future of self-improving AI systems
These advancements mark a critical step toward building AI agents capable of continuous self-improvement—a requirement for enterprises hesitant to trust autonomous systems with production workloads. By enabling agents to learn from their own history, Anthropic is addressing the core challenges of scalability, reliability, and trust in AI-driven workflows.
As organizations increasingly rely on AI agents for complex tasks, the ability to audit and verify an agent’s self-generated insights will become paramount. Anthropic’s approach ensures these improvements remain transparent and inspectable, striking a balance between automation and human oversight. The company’s rapid growth underscores the urgency of these innovations, as demand for scalable, self-correcting AI systems continues to outpace expectations.
AI summary
Anthropic introduced a new feature called dreaming on its Claude Managed Agents platform. The feature enables AI agents to improve themselves by learning from their past experiences.
