The challenge of preserving knowledge across AI agent sessions has led to complex database stacks—Postgres, pgvector, Neo4j—each adding overhead. What if the solution could be simpler than a notebook? A new open-source project called wuphf introduces a Karpathy-style wiki layer where AI agents read and write to a shared knowledge base stored in plain Markdown and Git, proving that durable, portable memory doesn’t require heavy infrastructure.
Wuphf’s approach flips the script on traditional agent memory systems by grounding knowledge in the most universal formats: text files and version control. Instead of locking facts into a vector database or a graph, it leverages Bleve for fast text search, SQLite for structured metadata, and Git for provenance. The result is a system where agents can compound context over time, review contributions like code changes, and take their knowledge with them when they leave.
How Agents Share Knowledge Without Losing Context
Wuphf structures knowledge into two main spaces: private notebooks and a shared team wiki. Each agent receives a dedicated notebook in agents/{agent-slug}/notebook.md, where it drafts observations, research, and working notes. These drafts can be reviewed—by another agent or a human—before being promoted to the canonical wiki with a backlink, ensuring only validated knowledge becomes part of the permanent record.
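The promotion step described above can be sketched in a few lines. This is a minimal illustration, not wuphf's actual implementation: the `promote` helper, the slug scheme, and the file layout beyond `agents/{agent-slug}/notebook.md` are assumptions made for the example.

```python
import re
from pathlib import Path

def promote(notebook: Path, wiki_dir: Path, heading: str) -> Path:
    """Copy a drafted section from an agent notebook into the shared wiki,
    leaving a backlink stub behind in the notebook (illustrative sketch)."""
    text = notebook.read_text()
    # Grab the "## heading" section up to the next heading or end of file.
    m = re.search(rf"(## {re.escape(heading)}\n.*?)(?=\n## |\Z)", text, re.S)
    if not m:
        raise ValueError(f"section {heading!r} not found")
    section = m.group(1)
    slug = heading.lower().replace(" ", "-")  # hypothetical slug scheme
    page = wiki_dir / f"{slug}.md"
    page.write_text(section)
    # Replace the draft with a backlink so the notebook points at the wiki.
    stub = f"## {heading}\nPromoted to [[{slug}]].\n"
    notebook.write_text(text.replace(section, stub))
    return page
```

In practice the review gate would sit between the draft and this copy step; the sketch only shows the mechanical move-plus-backlink.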
The promotion flow is managed by a lightweight state machine that handles expiry and auto-archiving. For example, entries past their shelf life are moved to an archive, while active knowledge remains accessible. This keeps the wiki clean without manual curation.
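A state machine like the one described could look roughly like this. The state names, the 90-day default shelf life, and the two-pass expire-then-archive flow are all assumptions for illustration; the source only says entries past their shelf life get archived automatically.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical states for the expiry flow.
ACTIVE, EXPIRED, ARCHIVED = "active", "expired", "archived"

@dataclass
class Entry:
    slug: str
    updated: date
    shelf_life_days: int = 90  # assumed default, not from the source
    state: str = ACTIVE

def tick(entry: Entry, today: date) -> Entry:
    """Advance one entry through the active -> expired -> archived flow."""
    if entry.state == ACTIVE and today - entry.updated > timedelta(days=entry.shelf_life_days):
        entry.state = EXPIRED
    elif entry.state == EXPIRED:
        entry.state = ARCHIVED  # a later pass moves expired entries to the archive
    return entry
```

Running `tick` over every entry on a schedule is all the "manual curation" the wiki would otherwise need.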
To track factual consistency, wuphf maintains per-entity append-only logs in team/entities/{kind}-{slug}.facts.jsonl. A background worker synthesizes these logs into concise entity briefs every N facts, ensuring summaries stay current. All commits are signed by "Pam the Archivist," a dedicated bot identity, so every change is traceable in the Git history.
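The append-and-synthesize loop can be sketched with stdlib JSON. The fact shape (`attr`/`value` keys), the value of N, and the last-write-wins merge are assumptions; the source specifies only an append-only JSONL log per entity and a brief regenerated every N facts.

```python
import json
from pathlib import Path

SYNTH_EVERY = 5  # "every N facts"; N = 5 is an assumption here

def append_fact(log: Path, fact: dict) -> bool:
    """Append one fact as a JSONL line; return True when a brief is due."""
    with log.open("a") as f:
        f.write(json.dumps(fact) + "\n")
    count = sum(1 for _ in log.open())
    return count % SYNTH_EVERY == 0

def synthesize_brief(log: Path) -> str:
    """Naive brief: the last value recorded for each attribute wins."""
    latest = {}
    for line in log.open():
        fact = json.loads(line)
        latest[fact["attr"]] = fact["value"]
    return "\n".join(f"- {k}: {v}" for k, v in sorted(latest.items()))
```

In wuphf the synthesis step is done by a background worker (and presumably an LLM, not a dict merge), with the result committed under the archivist identity.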
The system also supports [[wikilinks]]—internal cross-references similar to Obsidian or Roam—with visual cues for broken links. A daily lint cron scans for contradictions, stale entries, and dead links, keeping the knowledge base reliable even as it scales.
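The dead-link half of that lint pass is simple to sketch. The regex handles `[[page]]` and `[[page|alias]]` forms; how wuphf actually resolves targets (and whether it supports aliases or heading anchors) is an assumption.

```python
import re

# Capture the target of [[target]] or [[target|alias]], stopping at | or #.
WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def dead_links(page_text: str, known_slugs: set) -> list:
    """Return wikilink targets that do not resolve to a known page."""
    return [t.strip() for t in WIKILINK.findall(page_text)
            if t.strip() not in known_slugs]
```

A cron job running this over every page, plus staleness and contradiction checks, is the shape of the daily lint the article describes.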
Retrieval That Balances Speed and Citation
Wuphf doesn’t just store knowledge—it helps agents retrieve it efficiently. It offers a /lookup slash command and an MCP tool for cited retrieval. Queries are routed intelligently: short factual lookups go to Bleve’s BM25 index for speed, while narrative or explanatory queries trigger a cited-answer loop that pulls multiple sources together with citations.
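A toy version of that router might look like the following. The word-count threshold and marker list are invented for the example; wuphf's real routing signals aren't documented in this article.

```python
def route(query: str) -> str:
    """Crude router sketch: short keyword-ish queries go to BM25; longer,
    question-shaped queries go to the cited-answer loop. The threshold and
    markers below are assumptions, not wuphf's actual heuristic."""
    narrative_markers = ("why", "how", "explain", "compare", "summarize")
    if len(query.split()) <= 4 and not query.lower().startswith(narrative_markers):
        return "bm25"
    return "cited-answer"
```

The design point is the asymmetry: cheap lexical lookup for the common case, and the slower multi-source citation loop only when the query shape warrants it.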
Under the hood, the retrieval stack is intentionally minimal. Markdown files store durable knowledge, SQLite tracks metadata, and Bleve provides fast keyword search. Vector search is not included yet; the sqlite-vec extension is on standby in case recall on factual queries drops below 85% on the internal benchmark. On that benchmark of 500 artifacts and 50 test queries, Bleve alone achieves 85% recall@20, meeting the minimum bar for production use.
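The recall@20 figure cited above is a standard metric, and worth pinning down since it gates the sqlite-vec decision: for each query, it is the fraction of relevant documents that appear in the top 20 results, averaged over the query set. A minimal computation:

```python
def recall_at_k(results: list, relevant: set, k: int = 20) -> float:
    """Fraction of relevant documents found in the top-k results."""
    if not relevant:
        return 0.0
    hits = sum(1 for doc_id in results[:k] if doc_id in relevant)
    return hits / len(relevant)

def mean_recall(runs: list, k: int = 20) -> float:
    """Average recall@k over (results, relevant) pairs, one per test query."""
    return sum(recall_at_k(r, rel, k) for r, rel in runs) / len(runs)
```

Under this definition, "85% recall@20 over 50 queries" means that on average 85% of each query's relevant artifacts surface in Bleve's top 20 hits.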
Canonical IDs are central to stability. Each fact gets a deterministic ID that includes its sentence offset, ensuring reproducibility. Slugs are assigned once, merged via redirects, and never renamed—even if the underlying content evolves. This makes rebuilds logically consistent, even if the byte output differs.
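A deterministic ID of this kind is typically a hash over stable fields. The exact field layout below is an assumption; the article says only that IDs are deterministic and include the sentence offset.

```python
import hashlib

def fact_id(entity_slug: str, doc_path: str, sentence_offset: int) -> str:
    """Deterministic fact ID sketch: identical inputs always hash to the
    same ID, so index rebuilds are reproducible. Field layout is assumed."""
    key = f"{entity_slug}:{doc_path}:{sentence_offset}"
    return hashlib.sha256(key.encode()).hexdigest()[:12]
```

Because the ID depends only on its inputs, a full rebuild regenerates the same IDs, which is what makes rebuilds "logically consistent even if the byte output differs."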
Why Markdown and Git Over Databases
The choice to avoid Postgres, Neo4j, or Kafka was deliberate. Markdown files are human-readable, version-control-friendly, and survive long after the runtime. Git provides provenance, collaboration, and portability—you can clone the entire wiki and take it with you. SQLite adds structure for metadata without overloading the stack.
This design prioritizes durability and simplicity. The wiki isn’t just for agents—it’s for humans too. Any team member can inspect the history, review changes, or fork the knowledge base. The system is also modular: you don’t need to adopt the full WUPHF office to use the wiki layer. If you already run an agent like Claude Code or OpenClaw, you can point wuphf at it and start collaborating.
Known limitations include ongoing recall tuning—85% isn’t a universal guarantee—and synthesis quality that depends on input data quality. The lint pass helps, but wuphf isn’t a judgment engine; it reflects what agents observe. For now, the system is scoped to a single office, with no plans for cross-office federation.
Try It Now
Wuphf ships as part of WUPHF, an open-source collaborative office for AI agents. Installation takes seconds:
npx wuphf@latest

A five-minute terminal demo shows how an agent records facts, triggers synthesis, and commits the result under Pam the Archivist’s identity. The script at ./scripts/demo-entity-synthesis.sh walks through the full flow, ideal for teams testing agentic memory without complexity.
For skeptics asking why not use an Obsidian vault with a plugin, the answer lies in scalability and automation. Wuphf is designed for continuous, agent-driven updates with built-in governance, not just static notes. It’s a bet on lightweight infrastructure—one that could redefine how AI agents remember what matters.