The problem no one talks about
Most AI agent architectures treat memory as an afterthought — a flat list of conversation turns, a vector store bolted on the side, or a remote database that the agent calls over a network. These approaches all share the same fundamental flaw: they model memory as retrieval, not as structure.
Memory isn't a search index. It's a world model. An agent that can only look things up cannot reason about how things relate, how they change over time, or who made what promise to whom. That distinction matters enormously once you try to build agents that do anything more complex than answer a question.
For the past two years, while building memory systems for agents that run entirely on user devices, I searched for a database that could hold a proper world model — typed, graph-structured, vector-enriched, and embedded without cloud dependencies. The answer came from an unexpected direction.
A brief history: from SQLite workarounds to LadybugDB
The first book in this series — Pocket Knowledge Graphs — documents what happens when you try to build a graph database without a graph database. The approach works: using SQLite with careful schema design and vector extensions like sqlite-vec, you can implement graph-like structures, store embeddings, and perform semantic queries — all inside a single file that runs anywhere. I wrote that book because I couldn't recommend a better embedded option in good conscience.
The real answer was supposed to be Kuzu. Kuzu is a full-featured embedded graph database with native Cypher support, columnar storage, and the kind of query performance that makes graph traversal feel instant. I was following it closely throughout that writing process. Then, shortly before I finished, Kuzu's original team announced they were discontinuing the project. The chapters I had drafted about Kuzu did not make it into the book.
That was a genuine setback. Embedded graph databases are rare, and one that was actually production-quality was rarer still.
The community, however, picked it up. Several forks appeared. The most technically serious of them is LadybugDB — a continuation of Kuzu's codebase, now actively extended toward a specific purpose: agentic memory. The people building LadybugDB are not maintaining a general-purpose database. They are building the memory layer for sovereign AI agents. That alignment of purpose with my own research is what prompted the second book.
What LadybugDB actually is
LadybugDB inherits Kuzu's core architecture: an embedded graph database that runs in-process, stores data in a columnar format optimized for graph workloads, and speaks Cypher as its query language. On top of that foundation, it adds capabilities that matter specifically for agent memory.
Strongly typed graphs. LadybugDB enforces a schema on your graph. Nodes and relationships have declared types, and those types are validated at write time. This is not a full ontology system, but it is far more than a property bag. For agent memory systems, where the integrity of relationships between concepts determines the quality of reasoning, schema enforcement is not optional.
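To make the idea of write-time validation concrete, here is a minimal sketch of a typed property graph in plain Python. LadybugDB enforces this at the storage layer via declared node and relationship tables; the class and schema names below (`TypedGraph`, `Person`, `Project`, `WORKS_ON`) are purely illustrative, not LadybugDB's API.

```python
# Illustrative sketch: schema enforcement at write time for a typed graph.
# Not LadybugDB's API - just the validation idea in miniature.

class SchemaError(ValueError):
    pass

class TypedGraph:
    def __init__(self, node_types, rel_types):
        self.node_types = node_types  # e.g. {"Person": {"name"}}
        self.rel_types = rel_types    # rel name -> allowed (src type, dst type)
        self.nodes = {}               # id -> (type, properties)
        self.rels = []                # (src_id, rel_name, dst_id)

    def add_node(self, node_id, node_type, **props):
        if node_type not in self.node_types:
            raise SchemaError(f"unknown node type {node_type!r}")
        unknown = set(props) - self.node_types[node_type]
        if unknown:
            raise SchemaError(f"undeclared properties {unknown} on {node_type}")
        self.nodes[node_id] = (node_type, props)

    def add_rel(self, src, rel_name, dst):
        if rel_name not in self.rel_types:
            raise SchemaError(f"unknown relationship {rel_name!r}")
        want_src, want_dst = self.rel_types[rel_name]
        if self.nodes[src][0] != want_src or self.nodes[dst][0] != want_dst:
            raise SchemaError(f"{rel_name} must connect {want_src} -> {want_dst}")
        self.rels.append((src, rel_name, dst))

g = TypedGraph(
    node_types={"Person": {"name"}, "Project": {"title"}},
    rel_types={"WORKS_ON": ("Person", "Project")},
)
g.add_node("p1", "Person", name="Ada")
g.add_node("j1", "Project", title="Memory graph")
g.add_rel("p1", "WORKS_ON", "j1")      # accepted: endpoint types match
try:
    g.add_rel("j1", "WORKS_ON", "p1")  # rejected at write time
except SchemaError as e:
    print("rejected:", e)
```

The point of the sketch is the failure mode it prevents: a malformed relationship never enters the store, so downstream reasoning never has to defend against it.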
Native vector indexes. Alongside graph structure, LadybugDB stores vector embeddings and supports approximate nearest-neighbor search via HNSW indexes. This means a single database handles both structural queries ("what does this entity connect to, and how?") and semantic queries ("what concepts are similar to this one?"). The ability to compose graph and vector search in a single query is what makes local RAG architectures genuinely useful rather than a toy.
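The composition of the two query types can be sketched in a few lines of Python, with brute-force cosine similarity standing in for a real HNSW index. The entities, vectors, and relation names here are invented for illustration.

```python
# Sketch: semantic search (vector similarity) filtered by structural
# search (graph relation), as one composed query. Brute-force cosine
# stands in for an HNSW index; all data is illustrative.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

embeddings = {
    "coffee":   [0.9, 0.1, 0.0],
    "espresso": [0.8, 0.2, 0.1],
    "tea":      [0.7, 0.3, 0.0],
    "laptop":   [0.0, 0.1, 0.9],
}
edges = [("espresso", "IS_A", "coffee"), ("laptop", "MADE_BY", "acme")]

def similar_and_connected(query_vec, rel, k=2):
    # Step 1 (semantic): top-k nearest neighbors to the query vector.
    ranked = sorted(embeddings,
                    key=lambda e: cosine(query_vec, embeddings[e]),
                    reverse=True)[:k]
    # Step 2 (structural): keep only hits that are the source of `rel`.
    return [e for e in ranked if any(s == e and r == rel for s, r, d in edges)]

hits = similar_and_connected([0.85, 0.15, 0.05], "IS_A")
print(hits)
```

Neither step alone answers the question "what is semantically close to this *and* structurally related in this specific way" — the composition does.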
Embedded, local-first, private. LadybugDB runs on the device. There is no API call to a cloud service, no data leaving the machine, no third-party dependency in the inference path. For agents that handle personal information — and eventually all agents will — this is not a feature. It is the baseline.
What the book covers
The Memory Graph book is not a database manual. Cypher syntax and schema definition appear because you need them, but they are not the point. The point is the architecture of agent memory — what it needs to represent, why flat retrieval is insufficient, and how graph structure enables the reasoning patterns that make agents capable of acting in a world rather than just responding to prompts.
The book moves through several layers.
Hypergraphs and metagraphs. Standard property graphs model binary relationships between nodes. But real-world knowledge is frequently higher-order: a relationship between three or more entities, or a relationship that is itself an entity that other relationships point to. Hypergraphs generalize graphs to handle this. Metagraphs allow relationships between relationships. Both are necessary for representing the complexity of real agent environments, and both are implementable in LadybugDB with the right schema design.
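The standard implementation trick is reification: the higher-order relationship becomes a node, and each participant connects to it with a role-labelled binary edge. A minimal Python sketch of the idea, with invented node and role names:

```python
# Sketch: reifying an n-ary (hypergraph) relationship as a node so a
# binary property graph can hold it. All names are illustrative.

nodes = {"alice": "Person", "bob": "Person", "budget": "Topic"}
edges = []

def reify(rel_id, rel_type, **roles):
    # The relationship itself becomes a node; each participant is
    # attached to it by a role-labelled binary edge.
    nodes[rel_id] = rel_type
    for role, participant in roles.items():
        edges.append((rel_id, role, participant))
    return rel_id

# A 3-way relationship: who met whom, about what.
reify("meeting1", "Meeting", organizer="alice", attendee="bob", subject="budget")

# Metagraph step: a relationship about a relationship - a retraction
# pointing at the reified meeting node itself.
reify("retraction1", "Retraction", retracts="meeting1")

participants = [dst for src, role, dst in edges if src == "meeting1"]
print(participants)
```

Because the reified relationship is an ordinary node, other relationships can point at it — which is exactly what the metagraph case requires.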
Semantic Spacetime as a property graph. Semantic Spacetime is a formal ontology developed by Mark Burgess that provides a principled account of how agents perceive, categorize, and act in the world. It defines four fundamental relation types — NEAR/SIMILAR_TO, LEADS_TO, CONTAINS, EXPRESSES_PROPERTY — and from these derives a complete account of agency, memory, and causality. The book shows how to implement Semantic Spacetime directly as a LadybugDB schema, giving agent memory a theoretically grounded structure rather than an ad hoc one.
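To give a feel for how few primitives are involved, here is a toy graph using the four relation families as edge labels, plus one derived query (transitive containment). The example graph and helper function are illustrative; only the four labels come from Burgess's model.

```python
# Sketch: the four Semantic Spacetime relation families as edge labels,
# with transitive containment as a derived query. Example data invented.

NEAR, LEADS_TO, CONTAINS, EXPRESSES = \
    "NEAR", "LEADS_TO", "CONTAINS", "EXPRESSES_PROPERTY"

edges = [
    ("city", CONTAINS, "district"),
    ("district", CONTAINS, "street"),
    ("street", EXPRESSES, "cobbled"),
    ("rain", LEADS_TO, "wet_street"),
    ("street", NEAR, "avenue"),
]

def contained_in(root):
    # Follow CONTAINS edges transitively from a root node.
    found, frontier = set(), [root]
    while frontier:
        node = frontier.pop()
        for src, rel, dst in edges:
            if src == node and rel == CONTAINS and dst not in found:
                found.add(dst)
                frontier.append(dst)
    return found

print(sorted(contained_in("city")))
```

Each of the four families supports its own derived queries — causal chains over LEADS_TO, similarity neighborhoods over NEAR, attribute lookups over EXPRESSES_PROPERTY — which is what "theoretically grounded structure" buys in practice.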
Promise graphs. Promise Theory, also from Burgess, models multi-agent systems in terms of autonomous commitments rather than imposed obligations. An agent does not receive commands — it makes and receives promises. The memory implications of this model are significant: the agent needs to track not just facts about the world, but the structure of commitments, their sources, their conditions, and their outcomes. The book implements a Promise Graph schema in LadybugDB and uses it as the basis for audit trails and social memory across agent networks.
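A sketch of what such a record might track, loosely following Promise Theory's vocabulary. The field names and the audit helper are illustrative assumptions, not the book's schema or LadybugDB's API:

```python
# Sketch: a promise record and an audit query over a ledger of promises.
# Field names follow Promise Theory loosely; all data is illustrative.
from dataclasses import dataclass, field

@dataclass
class Promise:
    promiser: str          # the autonomous agent making the commitment
    promisee: str          # the agent the promise is made to
    body: str              # what is promised
    conditions: list = field(default_factory=list)
    outcome: str = "open"  # open | kept | broken

ledger = [
    Promise("agent_a", "agent_b", "deliver report", outcome="kept"),
    Promise("agent_a", "agent_b", "review draft", conditions=["draft received"]),
    Promise("agent_b", "agent_a", "send draft", outcome="broken"),
]

def audit(agent):
    # Everything this agent has committed to, with current outcomes.
    return {p.body: p.outcome for p in ledger if p.promiser == agent}

print(audit("agent_a"))
```

The graph version of this stores each promise as a node linked to promiser, promisee, and conditions, so the audit trail falls out of ordinary traversal.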
Why memory is not RAG. Retrieval-Augmented Generation is a useful pattern for injecting relevant context into a language model's prompt. It is not a memory architecture. The difference is structural: RAG retrieves text; a memory graph represents relationships. An agent consulting a memory graph can traverse causal chains, identify contradictions, track the provenance of beliefs, and reason about what it does not know. None of this is possible from a vector search alone. The book is explicit about this distinction and about the failure modes that result from confusing retrieval with memory.
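The structural difference can be shown in miniature: flat retrieval returns isolated hits, while traversal reconstructs the chain those hits belong to. The facts and edge names below are invented for illustration.

```python
# Sketch: flat retrieval vs graph traversal over the same facts.
# Retrieval finds an isolated match; traversal recovers the causal chain.

facts = {
    "f1": "server disk filled up",
    "f2": "database writes failed",
    "f3": "checkout errors spiked",
}
leads_to = [("f1", "f2"), ("f2", "f3")]  # LEADS_TO edges between facts

def retrieve(query_terms):
    # Flat retrieval: keyword match, no structure - each hit stands alone.
    return [fid for fid, text in facts.items()
            if any(t in text for t in query_terms)]

def causal_chain(start):
    # Graph traversal: follow LEADS_TO edges to downstream effects.
    chain, frontier = [start], start
    while True:
        nxt = [d for s, d in leads_to if s == frontier]
        if not nxt:
            return chain
        frontier = nxt[0]
        chain.append(frontier)

print(retrieve(["disk"]))                       # one isolated hit
print([facts[f] for f in causal_chain("f1")])   # the full causal story
```

Swapping keyword match for vector similarity does not change the picture: the retrieval step still returns points, not paths.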
Who this is for
If you are building AI agents that need to run locally, handle private data, or maintain coherent state across sessions — this book is the technical foundation for that work.
If you are interested in the theoretical side — knowledge representation, agent ontology, formal memory models — the book covers that ground with concrete implementations rather than abstract description.
If you are a practitioner who has been told that a vector database and a system prompt are all the memory an agent needs, the book is a structured argument for why that is insufficient and what to build instead.
Where to get it
The book is available on Amazon, but I recommend the Leanpub edition. Leanpub lets me push new chapters and revisions immediately — Amazon's publishing process introduces delays of days. Given how quickly LadybugDB is developing, the Leanpub version will consistently be more current.
Supporting the book also supports LadybugDB itself. The project is community-driven, and the book is part of how that community documents and develops the patterns needed to build serious agent memory systems.
Next in this series: world models for agents — not memory, but the fundamental representation of the environment an agent must reason about in order to make decisions.