In software development, speed has long been the ultimate measure of success. But when teams rely on AI agents to write code, the rules of the game change entirely. Boris Cherny, the engineer behind Anthropic’s Claude Code, highlights a critical insight: the most valuable work isn’t writing the code—it’s crafting the plan the AI follows. This isn’t just a minor tweak to workflows; it’s a fundamental shift in where leverage resides in software engineering today.
I lead a small development team that adopted AI-driven workflows from the start, and we’ve since transitioned to a hybrid model where humans and agents collaborate side by side. After a year of experimentation, one truth became undeniable: AI agents reduce the cost of typing, not the need for thoughtful planning. The real work has moved upstream, into specifications, contracts, and checklists. The discipline required to succeed isn’t the agile playbook I was trained on—it’s something far more deliberate and rigorous.
Here are the four key shifts I wish I had adopted from day one.
From Code to Contracts: The New Priority
AI agents are powerful, but they’re only as effective as the instructions they receive. Hand an agent a vague task like "improve the login flow," and you’ll get wildly different interpretations—fast. But provide a detailed specification outlining the contract, error boundaries, integration points, and concrete examples, and multiple agents can work in parallel without stepping on each other’s toes.
Velocity without a contract isn’t speed—it’s chaos.
The "plan" I’m advocating isn’t a bloated 200-page document. It’s a carefully structured set of artifacts that eliminate ambiguity about the system’s boundaries:
- Modular, numbered specifications that other documents and agents can reference by ID, ensuring consistency across the codebase.
- Architecture decision records (ADRs) documenting the reasoning behind non-obvious choices, so future engineers understand the trade-offs.
- Package-level READMEs explaining local constraints, invariants, and workarounds that wouldn’t make sense in a global spec.
- Code comments highlighting the surprising or non-obvious—like why a particular loop exists or what workaround addresses an edge case.
Each layer answers a distinct question: what the system does, why one approach was chosen over another, how to work within a module, and what assumptions might trip up the next reader. Conflating these layers is how plans degrade into noise.
And here’s a detail that sounds trivial but makes a massive difference: the plan lives in the repository, in Markdown. Not in a cloud drive, not in a wiki behind authentication, not in a tool that requires manual context injection. If the agent can’t read the plan directly from the repo in a format it can parse, you’ve lost the leverage the plan was supposed to provide. Markdown is the universal format—diff-able, version-controlled, and natively readable by every model on the market. If your spec is a .docx, your agents are working from a second-rate copy of it.
What belongs in the plan? Anything that must remain consistent across implementers: schemas, interface boundaries, error shapes, role matrices, observability contracts, and deployment sequences. What stays out? Internal algorithms, private utilities, and low-level implementation details: the exploratory work that happens behind the interface.
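To make "error shapes" concrete, here is a minimal sketch of the kind of contract worth pinning down. The spec ID, field names, and validator are hypothetical; the point is that every agent returns errors in one agreed shape instead of improvising its own:

```python
from dataclasses import dataclass

# Hypothetical error envelope (illustrative spec ID: SPEC-014).
# Every service returns errors in this exact shape, so agents working
# on different modules never have to guess the failure format.
@dataclass(frozen=True)
class ErrorShape:
    code: str        # stable machine-readable identifier, e.g. "AUTH_EXPIRED"
    message: str     # human-readable description, safe to log
    retryable: bool  # whether the caller may retry the request

def validate_error(payload: dict) -> ErrorShape:
    """Reject any error payload that drifts from the contract."""
    expected = {"code", "message", "retryable"}
    if set(payload) != expected:
        raise ValueError(f"error payload fields {set(payload)} != {expected}")
    return ErrorShape(**payload)
```

A validator like this turns the spec from prose into something an agent's output can be checked against mechanically.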
Crucially, this isn’t waterfall. The plan evolves, but the contracts within it remain stable. That stability is what allows parallel work to stay convergent, letting teams move at AI speed without collisions.
The upfront cost—days or even weeks of writing before the first agent runs—is quickly recouped when the team operates in parallel without costly rework.
Why Agile’s Playbook Isn’t Enough
Agile methodologies revolutionized software development by prioritizing adaptability, collaboration, and iterative progress. But in an AI-driven team, the classical artifacts of project management—work breakdown structures, dependency graphs, and acceptance-criteria-first tickets—aren’t relics of the past. They’re the foundation that makes autonomy safe.
This isn’t a rejection of agile principles. It’s a recognition that agile no longer sits at the bottom of the stack. Human teammates absorb context through osmosis—standups, Slack threads, hallway conversations—and recover from ambiguity in real time. An AI agent has none of that. It operates with a single prompt, the artifacts it can access on disk, and no ability to fill in gaps mid-task.
This asymmetry forces three critical habit shifts:
- Task decomposition must account for data dependencies, not just logical ones. A backend agent waiting on a schema migration can’t start until the migration is merged. A frontend agent consuming an API can’t begin until the spec is finalized. The dependency graphs that project managers sketched on whiteboards in 2008 are now load-bearing—because an agent will confidently hallucinate a missing column if you don’t explicitly define the contract.
- Acceptance criteria must precede implementation. In traditional agile, criteria often emerge during refinement or sprint planning. With AI agents, the criteria must be locked down before work begins, because the agent can’t infer them. This turns the traditional backlog into a specification-first pipeline where ambiguity is eliminated before the first line of code is written.
- Change control becomes explicit. Agile teams thrive on flexibility, but AI agents require stability in the contracts they rely on. A change to an API schema or a database field must be gated by a documented process—otherwise, every downstream agent breaks. This reintroduces elements of governance that agile often seeks to avoid, but in an agentic world, they’re necessary for stability.
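Making data dependencies load-bearing can be as simple as writing the task graph down and letting a topological order decide what an agent may start. A minimal sketch, with hypothetical task names:

```python
from graphlib import TopologicalSorter

# Hypothetical task graph: each task lists the artifacts it depends on.
# An agent is only handed a task once every dependency has merged, so it
# never has to hallucinate a schema or endpoint that doesn't exist yet.
tasks = {
    "schema_migration": [],
    "backend_api":      ["schema_migration"],  # needs the migrated schema
    "api_spec":         ["backend_api"],       # finalized against the real API
    "frontend":         ["api_spec"],          # consumes the finalized spec
}

sorter = TopologicalSorter(tasks)
order = list(sorter.static_order())  # a valid execution order for agents
```

The same structure also surfaces which tasks have no mutual dependencies and can therefore run in parallel.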
The result? A system where humans focus on high-level design and agents handle execution—without the constant firefighting of diverging assumptions.
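One way to lock acceptance criteria down before implementation is to write them as executable checks the agent must satisfy. A sketch under hypothetical names; the function body stands in for whatever the agent eventually produces:

```python
# Acceptance criteria for a hypothetical ticket, written before any
# implementation exists. The agent's job is to make these pass; the
# criteria themselves are frozen once work begins.
ACCEPTANCE_CRITERIA = [
    ("  Alice ", "alice"),  # leading/trailing whitespace is stripped
    ("BOB",      "bob"),    # usernames are case-insensitive
    ("carol",    "carol"),  # already-normal input passes through
]

def normalize_username(raw: str) -> str:
    """Stand-in implementation; the criteria above, not this body, are
    the contract."""
    return raw.strip().lower()

def meets_criteria() -> bool:
    return all(normalize_username(raw) == want
               for raw, want in ACCEPTANCE_CRITERIA)
```

The backlog item ships with the table; the implementation ships only when `meets_criteria()` is true.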
The Hybrid Workflow: Humans and Agents in Sync
Adopting AI agents doesn’t mean abandoning human creativity. Instead, it redefines roles. Humans become architects of systems, while agents act as parallel implementers operating within defined boundaries.
This hybrid model relies on three pillars:
- Explicit roles and permissions. Agents need clear boundaries on what they can modify, query, or deploy. A poorly defined role matrix leads to agents attempting to edit files they shouldn’t touch or calling APIs with incorrect permissions.
- Observability contracts. Every component must expose metrics, logs, and traces in a standardized format. Without this, agents can’t debug their own work—or yours.
- Iterative refinement loops. The plan isn’t static. As agents execute tasks, they uncover edge cases, performance bottlenecks, or integration gaps. These findings feed back into the spec, creating a continuous loop of improvement where the plan evolves alongside the implementation.
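As an illustration of the first pillar, here is a minimal sketch of a role matrix that gates what an agent may touch before any write is attempted. The roles and path patterns are hypothetical:

```python
from fnmatch import fnmatch

# Hypothetical role matrix: each agent role lists the path patterns it
# is allowed to modify; everything else is denied by default.
ROLE_MATRIX = {
    "frontend_agent": ["web/*"],
    "backend_agent":  ["services/*", "migrations/*"],
    "docs_agent":     ["docs/*", "*.md"],
}

def may_modify(role: str, path: str) -> bool:
    """Check a proposed edit against the role matrix before applying it."""
    return any(fnmatch(path, pattern)
               for pattern in ROLE_MATRIX.get(role, []))
```

Deny-by-default matters here: an unknown role, or a role with an empty entry, can modify nothing until someone explicitly grants it a pattern.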
The key is treating the plan as a living document—one that’s updated in lockstep with the codebase. Pull requests should include not just code changes but spec updates, ensuring the contract remains accurate. This turns traditional code reviews into spec reviews, where the focus shifts from does this code work? to does this code match the contract?
The Upfront Cost Pays Off in Parallel Speed
The most common objection to this approach is the time investment. Writing detailed specs before any code is written feels counterintuitive in a world that values rapid iteration. But the trade-off is simple: in our experience, an hour spent writing a clear spec saves many hours of debugging misaligned implementations.
Consider a team working on a new feature. With a vague spec, each developer might interpret requirements differently, leading to conflicting implementations, manual fixes, and delayed deployments. With a detailed plan, multiple agents can work in parallel, each adhering to the same contract. The result? Faster delivery, fewer errors, and a system where the real work—the design of robust interfaces and contracts—gets the attention it deserves.
This isn’t a return to waterfall. It’s an evolution. The work has shifted from typing to thinking, from sprints to specs, and from ambiguity to clarity. The teams that adopt this mindset early will reap the rewards—not just in speed, but in the quality of the systems they build.
As AI agents become more capable, the gap between teams that plan meticulously and those that don’t will only widen. The choice is clear: adapt now, or fall behind.
AI summary
For teams working with AI agents, planning has become more critical than the code itself. The success formula for agent-driven work: specifications, contracts, and dependency management.