Your AI agent forgets everything the moment the session ends. Here is why — and how to fix it.

LadybugDB for Edge Agent AI Memory
Seasoned Developer's Journey from COBOL to Web 3.0, SSI, Privacy First Edge AI, and Beyond
https://leanpub.com/ladybugdb

The Problem Everyone Ignores

The entire AI industry is obsessed with making context windows bigger. 128K tokens. A million tokens. But a bigger buffer is still a buffer. It is expensive, ephemeral, and vanishes the moment the conversation closes.

Vector stores help — they find text that sounds like your query. But try asking: "Who introduced me to the person who recommended the book that changed my approach?" A vector store cannot answer that. It has no concept of relationships, causality, or time.
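In a graph, that question becomes a single traversal. A sketch in Cypher, where every label, relationship type, and property name is an illustrative assumption (INTRODUCED here stands for "introduced me to"):

```cypher
// Hypothetical schema: (Person)-[:INTRODUCED]->(Person) means "introduced
// me to", (Person)-[:RECOMMENDED]->(Book) links recommender to book.
MATCH (who:Person)-[:INTRODUCED]->(friend:Person),
      (friend)-[:RECOMMENDED]->(b:Book {title: 'The Book That Changed My Approach'})
RETURN who.name;
```

The query follows the relationships themselves, so no similarity score is involved: either the chain exists or it does not.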

Your agent does not need a bigger context window. It needs memory.

What If Your Agent Had a Brain?

Imagine an AI agent that remembers entities, facts, events, and subjective experiences — organized in a graph that mirrors how human cognition actually works. An agent that can traverse causal chains, reason across time, and reconstruct narratives from weeks or months ago. An agent that runs entirely on the user's device, with zero cloud dependency and complete privacy.

This is not hypothetical. This is what you can build with the architecture described in "LadybugDB for Edge Agent AI Memory."

Seven Ideas That Change How You Build Agents

The book walks you through seven foundational concepts, each building on the last:

1. Private AI Starts at the Edge

Cloud-only AI is a privacy liability and a latency bottleneck. An embedded graph database runs inside your application process — no server, no network hop. The memory graph is a file that travels with the agent. Privacy is not a feature; it is an architectural guarantee. The book shows you exactly how to set this up with LadybugDB.

2. Graphs on the Edge Power Agents

Flat retrieval — whether context stuffing or vector search — cannot do relational reasoning. Graph traversal can. The book teaches you the property graph model and the Cypher query language from first principles, then shows how agents use multi-hop traversal, temporal filtering, and causal chain reconstruction to recall knowledge that no embedding can capture.
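As a flavor of what multi-hop traversal with temporal filtering looks like, here is a hedged sketch: a variable-length causal query over an assumed Event schema (labels, property names, and the date literal are all illustrative):

```cypher
// Reconstruct the causal chain (up to 5 hops) behind a decision,
// keeping only recent contributing events.
MATCH p = (cause:Event)-[:LEADS_TO*1..5]->(effect:Event {name: 'switched_frameworks'})
WHERE cause.occurred_at >= timestamp('2025-05-01 00:00:00')
RETURN p;
```

No embedding lookup can return a *path*; the chain of hops is exactly the relational structure vector search discards.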

3. LadybugDB Superpowers

LadybugDB is not just "a graph database that happens to be embedded." Typed schemas that double as formal ontologies. Native HNSW vector indexes alongside the graph. Columnar storage with vectorized query processing. Recursive path queries for chains of unknown depth. ACID transactions so your agent's memory is never corrupted. The book gives you hands-on experience with every one of these capabilities.
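A minimal sketch of a schema that doubles as an ontology. The table-based DDL style below follows the embedded-graph-database convention the book teaches; exact LadybugDB syntax is covered there, so treat this as illustrative:

```cypher
// Typed node and relationship tables: the schema is the ontology.
CREATE NODE TABLE Entity(id STRING, name STRING, PRIMARY KEY(id));
CREATE NODE TABLE Event(id STRING, name STRING, occurred_at TIMESTAMP, PRIMARY KEY(id));
CREATE REL TABLE LEADS_TO(FROM Event TO Event, confidence DOUBLE);
CREATE REL TABLE ABOUT(FROM Event TO Entity);
```

Because relationships are typed end-to-end, a LEADS_TO edge can only connect events to events; the database enforces the ontology instead of trusting the application to.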

4. Metagraphs and Hypergraphs

Real knowledge does not fit into binary relationships. "Alice and Bob co-authored a paper with Carol at a conference in 2024" is a multi-party relationship with its own properties. The book takes you from simple triples to hypergraphs to metagraphs — relationships about relationships — and shows you the bipartite layered pattern that makes all of this practical in standard Cypher.
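The core move is reification: the n-ary relationship becomes a node, and each participant attaches to it with a role edge. A hedged sketch in plain Cypher, with all names invented for illustration:

```cypher
// The bipartite layered pattern: a multi-party relationship as a node.
CREATE (c:Collaboration {year: 2024, venue: 'the conference'});
MATCH (a:Person {name: 'Alice'}), (b:Person {name: 'Bob'}),
      (r:Person {name: 'Carol'}), (c:Collaboration {year: 2024})
CREATE (a)-[:PARTICIPATES {role: 'author'}]->(c),
       (b)-[:PARTICIPATES {role: 'author'}]->(c),
       (r)-[:PARTICIPATES {role: 'author'}]->(c);
```

The Collaboration node can now carry its own properties, be cited by other relationships, and participate in further edges, which is what makes metagraphs expressible in standard Cypher.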

5. Semantic Spacetime

Every relationship in a graph falls into one of four fundamental types: LEADS_TO (causality), CONTAINS (hierarchy), EXPRESSES (meaning), and NEAR (proximity). This classification, drawn from Mark Burgess's work, turns a bag of connections into a navigable knowledge space. The book shows you how to implement this universal grammar and use it for structured traversal and reasoning.
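Once every edge carries one of the four types, traversal can be restricted to a single semantic dimension. A sketch, with node labels and names assumed for illustration:

```cypher
// Structured traversal: walk only causal (LEADS_TO) edges backwards
// from a failure, ignoring CONTAINS, EXPRESSES, and NEAR edges.
MATCH (f:Event {name: 'deploy_failed'})<-[:LEADS_TO*1..4]-(cause:Event)
RETURN DISTINCT cause.name;
```

The same graph answers hierarchy questions through CONTAINS and similarity questions through NEAR without the queries interfering with each other.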

6. Agentic Memory

The book constructs a complete, production-ready memory ontology step by step: entities, facts, events, and memories — each layer adding structure that enables richer recall. You will build typed edge-node tables, polymorphic relations, a time tree for temporal anchoring, and vector indexes for semantic search. This is not a toy example. It is the architecture behind the open-source Ladybug Memory library.
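To make the layering concrete, here is a hedged recall sketch: anchor on a day in the time tree, then walk from memories to the facts and entities they involve. The table and relationship names below are assumptions for illustration, not the book's final ontology:

```cypher
// Layered recall: time tree -> memories -> facts -> entities.
MATCH (d:Day {date: date('2025-06-01')})<-[:OCCURRED_ON]-(m:Memory)
MATCH (m)-[:EXPRESSES]->(f:Fact)-[:ABOUT]->(e:Entity)
RETURN m.summary, f.statement, e.name;
```

Each layer narrows the question a different way: the time tree answers "when", the fact layer answers "what", and the entity layer answers "about whom".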

7. Promise Graphs

The final frontier. Traditional multi-agent systems think in commands — Agent A tells Agent B what to do. Promise theory flips this: agents voluntarily commit to what they will deliver. The book introduces a six-layer promise graph architecture — data traces, promises, assessments, intent, decisions, and results — all modeled as first-class graph nodes with semantic spacetime classification. Promise graphs tell you not just what happened but what was supposed to happen, why it didn't, and what to learn from it.
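With promises and assessments as first-class nodes, post-mortems become queries. A sketch under assumed names (Agent, Promise, Assessment, and their properties are all illustrative):

```cypher
// List broken promises together with the lesson recorded about each.
MATCH (ag:Agent {name: 'planner'})-[:MADE]->(p:Promise)
MATCH (p)<-[:ASSESSES]-(a:Assessment {kept: false})
OPTIONAL MATCH (a)-[:LEADS_TO]->(lesson:Memory)
RETURN p.body, a.reason, lesson.summary;
```

The OPTIONAL MATCH matters: a broken promise with no recorded lesson is itself a finding.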

The Synthesis: Memory + Promises = Powerful Agents

Here is the insight the book builds toward:

  • Memory graphs capture what the agent knows — entities, facts, events, subjective memories.

  • Promise graphs capture what the agent does — commitments, actions, assessments, outcomes.

Together, they form a complete cognitive architecture. The memory graph informs decisions. The promise graph records those decisions and their consequences. Assessment feeds back into memory, updating trust and learned patterns. All of it persists on-device, traversable in microseconds, private by design.
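The feedback loop can be sketched as one query that bridges both graphs. All labels and properties below are illustrative assumptions:

```cypher
// Assessment feeds back into memory: the kept-ratio of promises made
// to a peer updates the trust score stored about that peer.
MATCH (p:Promise)-[:MADE_TO]->(peer:Agent),
      (p)<-[:ASSESSES]-(a:Assessment)
WITH peer, avg(CASE WHEN a.kept THEN 1.0 ELSE 0.0 END) AS kept_ratio
MATCH (e:Entity) WHERE e.name = peer.name
SET e.trust = kept_ratio;
```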

This is what it means to go beyond the context window.

What You Get

The book is structured as a deliberate arc from foundations to architecture:

  • Chapters 0–2: Property graphs, Cypher, and the embedded database advantage

  • Chapters 3–4: Typed schemas as ontology, subgraphs for isolation

  • Chapters 5–7: Hypergraphs, metagraphs, semantic spacetime

  • Chapter 8: Vector indexes and hybrid graph+vector queries

  • Chapters 9–10: The complete agentic memory ontology, step by step

  • Chapter 11: Graph algorithms for knowledge analysis

  • Chapter 12: Ladybug Memory — the working open-source implementation

  • Chapters 13–14: Promise graphs and the promise graph ontology

Every chapter includes executable Cypher examples. No prior graph database experience required — the book starts from first principles.

Who This Book Is For

You are building AI agents, personal assistants, or knowledge systems that need to remember. You are tired of workarounds — stuffing context, re-ingesting documents, losing everything between sessions. You want a real architecture for persistent, structured, queryable agent memory that runs on the edge, respects privacy, and enables reasoning that flat retrieval cannot touch.

"LadybugDB for Edge Agent AI Memory" gives you that architecture, from theory to working code.

The next generation of AI will not simply compute. It will remember.


Get the book on Leanpub and start building agents that remember.