iToverDose/Software · 30 April 2026 · 20:07

How to build a 10-agent AI product team with Claude Code

Replacing a solo AI workflow with a structured team of specialized agents can cut costs by 40% while improving product quality. Here’s how one engineer set it up using only markdown files and a single Claude subscription.

DEV Community · 6 min read

Building a real product requires more than a single AI agent; it demands a structured team. Context drift, decision fatigue, and an inability to challenge their own work lead even the smartest models to deliver mediocre output. That’s why one engineer replaced a solo AI workflow with a coordinated team of 10 specialized agents, slashing costs and improving outcomes using only markdown files and a single Claude Max subscription.

The new setup eliminates the overhead of gateway servers, WebSocket connections, and agent process management, while reducing token costs by running procedural agents on Sonnet 4.6 instead of Opus 4.6. Each agent plays a distinct role, from market researcher to compliance auditor, ensuring no step is skipped and no assumption goes unchallenged. The result is a scalable, auditable, and cost-efficient product development pipeline that runs entirely within a local folder structure.

Why a solo AI agent isn’t enough

A single large language model may seem capable, but it struggles with consistency and self-critique. Context drift erodes focus over long sessions, decision fatigue sets in when every choice demands attention, and the model’s inability to rigorously challenge its own assumptions leads to flawed outputs—even when the model itself is highly capable.

This is especially true when building real products, not side projects. A solo agent lacks the specialized roles needed at each stage: someone to validate the idea, another to design the architecture, a third to write the PRD, and yet another to audit the code. Without these roles, the result is often a brittle prototype rather than a robust, shippable product.

Recognizing these limitations, the engineer shifted from a single-agent setup to a structured team of agents, each responsible for a specific function in the product development lifecycle. The team spans eight stages—from ideation to go-to-market—with built-in quality gates and cross-agent reviews to ensure nothing escapes oversight.

From OpenClaw to markdown-only teams

Initially, the team was orchestrated using OpenClaw, an open-source AI assistant framework with over 300,000 GitHub stars. OpenClaw allowed the creation of specialized agents with defined personas, tools, and skills, connected via messaging platforms like Slack and Discord. The setup worked, but the infrastructure overhead was significant.

Running a gateway server, managing WebSocket connections, configuring ports, and handling agent process management introduced real friction. Every agent session triggered API calls, and token costs accumulated quickly as multiple agents performed deep work. Worse, agents could only communicate through the orchestrator, never directly, which slowed down decision-making and made debugging handoff failures tedious. There was no native support for test-driven development (TDD), design alternatives, or cross-model code reviews.

Then the engineer discovered Claude Code’s Agent Teams feature, which lets you coordinate multiple Claude Code sessions as a team. Each agent runs in its own context window and can message others directly, eliminating the need for a centralized orchestrator. Agents are defined as markdown files in a .claude/agents/ directory, making the setup lightweight and portable. This shift reduced infrastructure overhead to zero and cut token costs by assigning procedural agents to Sonnet 4.6 while reserving Opus 4.6 for agents needing deeper reasoning.

The structure: 10 agents, 0 servers

The team is organized into a simple folder structure within the project root:

your-project/
├── CLAUDE.md                  # Shared architecture and coding standards
└── .claude/
    ├── settings.json          # Enables Agent Teams and sets the lead agent
    ├── project-config.md      # Project identity, paths, and identifiers
    └── agents/
        ├── athina.md          # Lead orchestrator and project manager
        ├── scout.md           # Market researcher
        ├── spectra.md         # PRD writer
        ├── pixel.md           # Designer and architect
        ├── builder.md         # Engineer
        ├── auditor.md         # Compliance reviewer
        ├── bugsy.md           # QA tester
        ├── piper.md           # DevOps engineer
        ├── nova.md            # Marketing lead
        └── quill.md           # Content writer

Each agent is a markdown file containing its system prompt, role definition, and personality. The setup requires no servers, no WebSocket configurations, and no external services—just markdown files in a folder. The lead agent, Athina, coordinates the team and enforces sequential stage execution, ensuring no step is skipped and all quality gates are passed.
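As a concrete sketch, an agent file might look like the following. The frontmatter fields (`name`, `description`, `model`) follow Claude Code's subagent file format, but the persona text here is a hypothetical illustration, not the engineer's actual scout.md:

```markdown
---
name: scout
description: Market researcher. Validates product ideas against market data,
  competitors, and target personas before any build work starts.
model: opus
---

You are Scout, the team's market researcher. For every product idea you
receive, analyze market fit, map the competitive landscape, and profile the
target personas. End every report with an explicit Go/No-Go recommendation
backed by data, never by assumptions.
```

Because the whole definition lives in one file, versioning an agent is just committing a markdown change, and porting the team to a new project is copying a folder.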

The team is split between Opus 4.6 and Sonnet 4.6 models. Agents requiring open-ended reasoning (Athina, Scout, Spectra, Pixel, Builder) use Opus 4.6, while procedural agents (Auditor, Bugsy, Piper, Nova, Quill) run on Sonnet 4.6. This allocation reduces token costs without sacrificing quality on tasks that don’t require deep reasoning.

Introducing the team members

Athina: the lead project manager

Athina serves as the brain of the operation, acting as both orchestrator and accountability partner. She manages Linear issues before any agent begins work, updates project context files at critical touchpoints, and runs "Grill Me" sessions where she challenges assumptions and pushes for velocity.

Her prompts emphasize momentum and decision-making. She doesn’t just report completion—she immediately proposes the next action and nudges for approvals. If the human lead ("Mir") stalls, she gently reminds them, ensuring the pipeline keeps moving.

Scout: validating ideas before they begin

Scout handles Stage 2, validating product ideas by analyzing market fit, competitors, and target personas. She delivers a Go/No-Go recommendation based on data, not assumptions, preventing wasted effort on unviable concepts.

Spectra: generating alternatives before writing PRDs

Spectra operates in Stage 3 and introduces a crucial step before PRD creation: proposing multiple product directions with trade-offs. She presents three viable approaches—each with scope, timeline, and risk assessments—then lets the human lead choose one. Only after that choice does she draft the PRD for the selected direction, eliminating scope creep before it starts.

For example:

  • Approach A: Full platform (25 requirements, 8 weeks, max market share)
  • Approach B: Focused MVP (15 requirements, 4 weeks, fast validation)
  • Approach C: API-first (12 requirements, 3 weeks, developer market)

Spectra recommends Approach B for fastest time to market, but the final call rests with the human lead.

Pixel: designer and architect with alternatives

Pixel, the designer and architect, works in three phases. First, she proposes two or three architectural approaches. Then she creates HTML mockups of three different UI designs. Finally, she refines the chosen design based on feedback. This iterative process ensures the final product aligns with user needs and technical constraints.

The workflow: structured stages with quality gates

The team follows a strict eight-stage pipeline:

  1. Ideation
  2. Research & feasibility
  3. PRD writing
  4. Design & architecture
  5. Engineering
  6. QA & staging
  7. Production deployment
  8. Go-to-market

Each stage has built-in quality gates: the Auditor reviews every deliverable before it advances, and the lead agent enforces sequential execution. Grill Me sessions challenge assumptions at every major milestone, so no flawed premise slips through.

The system is designed to keep the human lead in control while automating the heavy lifting. Linear issues are created automatically before any agent begins work, and project context is updated at eight critical touchpoints per stage, maintaining traceability and consistency.
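The sequential gating described above can be sketched in a few lines of Python. The stage names come from the article; the gate logic is a hypothetical illustration of the control flow, not how Claude Code actually enforces it:

```python
# Hypothetical sketch of the eight-stage pipeline with quality gates.
STAGES = [
    "Ideation",
    "Research & feasibility",
    "PRD writing",
    "Design & architecture",
    "Engineering",
    "QA & staging",
    "Production deployment",
    "Go-to-market",
]

def run_pipeline(audit):
    """Run stages in order; halt at the first deliverable the auditor rejects.

    `audit` maps a stage name to True (gate passed) or False (gate failed).
    Returns the list of completed stages.
    """
    completed = []
    for stage in STAGES:
        if not audit(stage):
            break  # quality gate failed: nothing advances past this stage
        completed.append(stage)
    return completed

# Example: the auditor rejects the Engineering deliverable, so the
# pipeline halts after Design & architecture.
done = run_pipeline(lambda stage: stage != "Engineering")
print(done[-1])  # → Design & architecture
```

The key property the article describes is that a rejected deliverable blocks every later stage, rather than the team working ahead on unvalidated output.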

The bottom line: scalable, auditable, and cost-efficient

By replacing a server-dependent AI orchestration system with a markdown-only team, the engineer eliminated infrastructure overhead, reduced token costs, and improved product quality through specialization and cross-agent reviews. The result is a scalable, auditable development pipeline that runs entirely within a local folder structure.

This approach proves that a structured team of AI agents—each with a defined role and personality—can outperform a single LLM while staying lightweight and cost-effective. As AI models continue to evolve, teams like this one will redefine how products are built, validated, and shipped.

AI summary

Discover where a single AI agent falls short. Learn how bringing ten specialized agents together can accelerate product development and cut costs.
