Building expressive AI memory structures on a graph database you can actually run today
In my previous article I made a case for why the humble triple fails us as a memory model, and why the path forward runs through hypergraphs, metagraphs, and ultimately the bipartite layered graph as a practical implementation target. I ended with a promise: LadybugDB and Kùzu-compatible Cypher can express all of this natively.
This article makes good on that promise. We will go from first principles — what LadybugDB actually is, how its schema works — all the way to a running semantic spacetime schema with causality, temporal awareness, and graph clustering. Every code block is executable against a Kùzu-compatible engine today.
LadybugDB Fundamentals
What LadybugDB Is (and Why It Matters for Memory)
LadybugDB is a property graph database optimized for object storage and edge-device deployment, built on the Kùzu engine. It speaks Cypher, stores nodes and relationships in typed tables, and runs embedded — meaning it lives inside your agent process rather than as a remote server. For AI memory workloads, this matters enormously: round-trip latency disappears, the graph travels with the agent, and the whole thing can fit on a phone.
The key architectural idea borrowed from Kùzu is that both nodes and relationships are defined as typed tables with explicit schemas. This is different from property graph databases that allow arbitrary key-value bags on any node. In LadybugDB, you declare what a node is before you create one — and that declaration is enforced.
How Table Definitions Work as Ontology
In traditional knowledge engineering, an ontology is a formal specification of types, properties, and relationships in a domain. In LadybugDB, your CREATE NODE TABLE and CREATE REL TABLE statements are your ontology. They are not documentation — they are the schema the database enforces at write time.
Consider this declaration:
```cypher
CREATE NODE TABLE EntityNode (
    id STRING,
    label STRING,
    kind STRING,
    layer STRING,
    learned_at TIMESTAMP,
    expired_at TIMESTAMP,
    PRIMARY KEY (id)
);
```

This is not just a storage hint. It says: every entity in this knowledge graph has a kind (what type of thing it is), a layer (what level of abstraction it lives at), and a temporal lifespan. Any attempt to create an EntityNode without these fields fails. The schema enforces the ontological commitment.
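To make the enforcement idea concrete outside the database, here is a minimal Python sketch of write-time schema checking. Everything in it (`ENTITY_SCHEMA`, `create_entity`) is hypothetical and only illustrates the rule stated above; LadybugDB does this for you at the engine level.

```python
# Hypothetical sketch of write-time schema enforcement.
# The declared field set plays the role of the CREATE NODE TABLE statement:
# a record missing any required field is rejected before it is stored.
ENTITY_SCHEMA = {"id", "label", "kind", "layer", "learned_at", "expired_at"}

def create_entity(record):
    """Accept a record only if it satisfies the declared EntityNode schema."""
    missing = ENTITY_SCHEMA - record.keys()
    if missing:
        raise ValueError(f"EntityNode rejected, missing fields: {sorted(missing)}")
    return record

# A complete record passes.
create_entity({"id": "e:berlin", "label": "Berlin", "kind": "place",
               "layer": "instance", "learned_at": "2025-01-01", "expired_at": None})

# An incomplete record is rejected at "write time".
try:
    create_entity({"id": "e:bad"})
except ValueError as err:
    print(err)
```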
The same logic applies to relationship tables:
```cypher
CREATE REL TABLE CONNECTS (
    FROM EntityNode TO SimilarEdge | ContainsEdge | HasPropertyEdge | LeadsToEdge
);
```

This declaration encodes a fundamental ontological rule: in this knowledge graph, entities connect to edge-nodes, never to other entities directly. The bipartite constraint is not a convention — it is baked into the rel table definition and enforced by the engine.
The practical consequence: when you read the schema, you read the ontology. When the ontology changes, you migrate the schema. There is no gap between the two.
Polymorphic Relations: One Rel Table, Many Types
Typed-schema graph databases classically allow only a single FROM/TO node-type pair per relationship table. LadybugDB (via Kùzu) supports polymorphic FROM and TO declarations — a single relationship table can wire multiple node types.
This is powerful because it lets us express the bipartite rule in exactly two tables:
```cypher
-- Every departure from an entity goes through CONNECTS
CREATE REL TABLE CONNECTS (
    FROM EntityNode TO SimilarEdge | ContainsEdge | HasPropertyEdge | LeadsToEdge
);

-- Every arrival at an entity goes through BINDS
CREATE REL TABLE BINDS (
    FROM SimilarEdge | ContainsEdge | HasPropertyEdge | LeadsToEdge TO EntityNode
);
```

Two tables. The complete relationship vocabulary of the graph. Any attempt to create a CONNECTS edge from a SimilarEdge or a BINDS edge to a ContainsEdge is rejected at the schema level — not at the application level, not at query time, but at the moment of write.
When you query, you address both tables uniformly:
```cypher
MATCH (src:EntityNode)-[:CONNECTS]->(edge)-[:BINDS]->(tgt:EntityNode)
RETURN src.label, labels(edge)[0] AS relation_type, tgt.label;
```

The pattern (Entity)-[:CONNECTS]->(EdgeNode)-[:BINDS]->(Entity) becomes the universal traversal idiom. You never write entity-to-entity hops — because they cannot exist.
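The two-hop idiom is easy to picture as plain data structures. The following Python sketch is illustrative only (the ids and dict layout are invented for this example): CONNECTS maps entities to edge-nodes, BINDS maps edge-nodes back to entities, and every traversal is forced through that middle hop.

```python
# Illustrative in-memory model of the bipartite traversal idiom.
# Entities and edge-nodes are plain records keyed by id; an entity can
# only reach another entity through an edge-node, never directly.
entities = {"e:germany": {"label": "Germany"}, "e:berlin": {"label": "Berlin"}}
edge_nodes = {"ce:de-berlin": {"type": "ContainsEdge", "kind": "spatial"}}

connects = {"e:germany": ["ce:de-berlin"]}   # CONNECTS: entity -> edge-node
binds = {"ce:de-berlin": ["e:berlin"]}       # BINDS: edge-node -> entity

def traverse(src_id):
    """Yield (src, relation_type, tgt) via the mandatory two-hop pattern."""
    for edge_id in connects.get(src_id, []):
        for tgt_id in binds.get(edge_id, []):
            yield (entities[src_id]["label"],
                   edge_nodes[edge_id]["type"],
                   entities[tgt_id]["label"])

print(list(traverse("e:germany")))  # [('Germany', 'ContainsEdge', 'Berlin')]
```

Note that the relation type is read off the intermediate edge-node, exactly as `labels(edge)[0]` does in the Cypher query above.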
Basic Cypher for Beginners
If you are new to Cypher, here are the five patterns you need for everything in this article.
Creating nodes:
```cypher
CREATE (:EntityNode {
    id: 'e:berlin',
    label: 'Berlin',
    kind: 'place',
    layer: 'instance',
    learned_at: timestamp('2025-01-01T00:00:00'),
    expired_at: NULL
});
```

Creating relationships:
```cypher
MATCH (a:EntityNode {id: 'e:berlin'}), (e:ContainsEdge {id: 'ce:berlin-mitte'})
CREATE (a)-[:CONNECTS]->(e);
```

Reading with filters:
```cypher
MATCH (src:EntityNode)-[:CONNECTS]->(edge:SimilarEdge)-[:BINDS]->(tgt:EntityNode)
WHERE edge.similarity > 0.7 AND edge.expired_at IS NULL
RETURN src.label, edge.similarity, tgt.label;
```

Updating a property:
```cypher
MATCH (e:SimilarEdge {id: 'se:km'})
SET e.similarity = 0.91;
```

Expiring a node (soft delete):
```cypher
MATCH (e:LeadsToEdge {id: 'le:bk'})
SET e.expired_at = timestamp('2025-06-01T00:00:00');
```

That is genuinely the core vocabulary. Everything else is a combination of these five operations.
The Semantic Spacetime Schema
Now we build the actual schema. The term “semantic spacetime” refers to a knowledge representation where meaning is organized along two orthogonal axes: semantic space (what things are and how they relate) and time (when knowledge was acquired and when it expires). Every node in the graph lives somewhere in this two-axis fabric.
The Full Schema
```cypher
-- ============================================================
-- NODE TABLES
-- ============================================================
CREATE NODE TABLE EntityNode (
    id STRING,
    label STRING,
    kind STRING,          -- "concept" | "actor" | "event" | "place" | "value"
    layer STRING,         -- "core" | "domain" | "instance" | "meta"
    learned_at TIMESTAMP,
    expired_at TIMESTAMP,
    PRIMARY KEY (id)
);

CREATE NODE TABLE SimilarEdge (
    id STRING,
    layer STRING,
    kind STRING,          -- "semantic" | "structural" | "functional"
    context STRING,
    similarity DOUBLE,    -- [0.0 … 1.0]
    learned_at TIMESTAMP,
    expired_at TIMESTAMP,
    PRIMARY KEY (id)
);

CREATE NODE TABLE ContainsEdge (
    id STRING,
    layer STRING,
    kind STRING,          -- "spatial" | "categorical" | "temporal" | "logical"
    context STRING,
    probability DOUBLE,   -- [0.0 … 1.0]
    learned_at TIMESTAMP,
    expired_at TIMESTAMP,
    PRIMARY KEY (id)
);

CREATE NODE TABLE HasPropertyEdge (
    id STRING,
    layer STRING,
    kind STRING,          -- "intrinsic" | "relational" | "derived"
    property_name STRING,
    learned_at TIMESTAMP,
    expired_at TIMESTAMP,
    PRIMARY KEY (id)
);

CREATE NODE TABLE LeadsToEdge (
    id STRING,
    layer STRING,
    kind STRING,          -- "causal" | "sequential" | "inferential" | "temporal"
    context STRING,
    learned_at TIMESTAMP,
    expired_at TIMESTAMP,
    PRIMARY KEY (id)
);

-- ============================================================
-- 2 POLYMORPHIC RELATIONSHIP TABLES (bipartite enforcement)
-- ============================================================
CREATE REL TABLE CONNECTS (
    FROM EntityNode TO SimilarEdge | ContainsEdge | HasPropertyEdge | LeadsToEdge
);

CREATE REL TABLE BINDS (
    FROM SimilarEdge | ContainsEdge | HasPropertyEdge | LeadsToEdge TO EntityNode
);
```

The Four Semantic Spacetime Relations
The four edge-node types are not arbitrary — they map to the four fundamental operations of a knowledge space:
| Edge Node | Semantic Role | Spacetime Axis |
| --- | --- | --- |
| SimilarEdge | Two concepts occupy nearby regions of meaning-space | Space (proximity) |
| ContainsEdge | One concept encloses another in the ontology hierarchy | Space (topology) |
| HasPropertyEdge | A concept carries an attribute at a point in time | Space + Time |
| LeadsToEdge | One event or state precedes and influences another | Time (causality) |
Together they are sufficient to express any relationship in a knowledge graph. Similarity covers analogical reasoning. Containment covers taxonomic and compositional structure. Property attribution covers state. Causality covers dynamics.
How Edge Nodes Unlock Hypergraphs and Metagraphs
This is the central insight from my previous article, now made concrete in code.
Edge Nodes as Hyperedges
A standard graph edge connects exactly two nodes. A hyperedge connects any number simultaneously. The classic example: a team meeting involves Alice, Bob, Carol, a room, a topic, and an emotional tone — all at once, not pairwise.
In LadybugDB, we express this by creating one ContainsEdge (or SimilarEdge, etc.) node and connecting it to all participants via CONNECTS and BINDS. The edge-node is the hyperedge made first-class.
```cypher
-- The "Q1 Planning Meeting" as an event entity
CREATE (:EntityNode {
    id: 'e:q1-meeting',
    label: 'Q1 Planning Meeting',
    kind: 'event',
    layer: 'instance',
    learned_at: timestamp('2025-01-15T09:00:00'),
    expired_at: NULL
});

-- Alice, Bob, Carol, the room, and the topic as entities
CREATE (:EntityNode { id:'e:alice',   label:'Alice',        kind:'actor',   layer:'instance', learned_at: timestamp('2025-01-01T00:00:00'), expired_at: NULL });
CREATE (:EntityNode { id:'e:bob',     label:'Bob',          kind:'actor',   layer:'instance', learned_at: timestamp('2025-01-01T00:00:00'), expired_at: NULL });
CREATE (:EntityNode { id:'e:carol',   label:'Carol',        kind:'actor',   layer:'instance', learned_at: timestamp('2025-01-01T00:00:00'), expired_at: NULL });
CREATE (:EntityNode { id:'e:room-4',  label:'Room 4',       kind:'place',   layer:'instance', learned_at: timestamp('2025-01-01T00:00:00'), expired_at: NULL });
CREATE (:EntityNode { id:'e:roadmap', label:'2025 Roadmap', kind:'concept', layer:'domain',   learned_at: timestamp('2025-01-01T00:00:00'), expired_at: NULL });

-- One ContainsEdge node = one hyperedge connecting all participants
CREATE (:ContainsEdge {
    id: 'ce:meeting-participants',
    layer: 'instance',
    kind: 'temporal',
    context: 'team:engineering',
    probability: 1.0,
    learned_at: timestamp('2025-01-15T09:00:00'),
    expired_at: NULL
});

-- Wire the meeting entity to the hyperedge
MATCH (m:EntityNode {id:'e:q1-meeting'}), (ce:ContainsEdge {id:'ce:meeting-participants'})
CREATE (m)-[:CONNECTS]->(ce);

-- Wire the hyperedge to all participants simultaneously
MATCH (ce:ContainsEdge {id:'ce:meeting-participants'}), (a:EntityNode {id:'e:alice'})   CREATE (ce)-[:BINDS]->(a);
MATCH (ce:ContainsEdge {id:'ce:meeting-participants'}), (b:EntityNode {id:'e:bob'})     CREATE (ce)-[:BINDS]->(b);
MATCH (ce:ContainsEdge {id:'ce:meeting-participants'}), (c:EntityNode {id:'e:carol'})   CREATE (ce)-[:BINDS]->(c);
MATCH (ce:ContainsEdge {id:'ce:meeting-participants'}), (r:EntityNode {id:'e:room-4'})  CREATE (ce)-[:BINDS]->(r);
MATCH (ce:ContainsEdge {id:'ce:meeting-participants'}), (t:EntityNode {id:'e:roadmap'}) CREATE (ce)-[:BINDS]->(t);

-- Query: who and what was in the meeting?
MATCH (m:EntityNode {id:'e:q1-meeting'})-[:CONNECTS]->(ce:ContainsEdge)-[:BINDS]->(participant)
RETURN participant.label, participant.kind;
```

Five entities bound to one hyperedge in six wiring statements (one CONNECTS, five BINDS). No artificial intermediate nodes. The arity of the ContainsEdge node (how many things it BINDS to) directly encodes the cardinality of the original hyperedge.
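The arity point can be stated in a few lines of Python. This sketch is purely illustrative (the record layout and `arity` helper are invented here, not a LadybugDB API): a hyperedge is just a record with a list of bound members, and its cardinality is the length of that list plus its source.

```python
# Hypothetical in-memory sketch: a hyperedge as a first-class record.
# The cardinality of the relation is read directly off the record,
# not reconstructed from pairwise edges.
hyperedge = {
    "id": "ce:meeting-participants",
    "type": "ContainsEdge",
    "connects_from": "e:q1-meeting",              # the event entity
    "binds_to": ["e:alice", "e:bob", "e:carol",   # the bound participants
                 "e:room-4", "e:roadmap"],
}

def arity(edge):
    """Cardinality of the hyperedge: the source entity plus all bound targets."""
    return 1 + len(edge["binds_to"])

print(arity(hyperedge))  # 6: one meeting entity plus five participants
```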
ContainsEdge as Metagraph: Referencing Other Edge Nodes
The metagraph step is more subtle. In a metagraph, relationships can themselves be related to other relationships. In our schema, because edge nodes are first-class EntityNode-adjacent nodes, we can promote any edge node to act as an entity in a higher-level relation.
The trick is that an EntityNode can represent not just a concrete thing, but a reference to an edge node from a lower layer. We use kind: 'edge-ref' and layer: 'meta' to mark these:
```cypher
-- Suppose we have two causal relations already in the graph:
--   LeadsToEdge 'le:stress-causes-errors'  (stress leads to errors)
--   LeadsToEdge 'le:errors-cause-rework'   (errors lead to rework)

-- Promote them to entities so we can reason about the causal chain itself
CREATE (:EntityNode {
    id: 'eref:stress-causes-errors',
    label: 'Stress→Errors (causal link)',
    kind: 'edge-ref',
    layer: 'meta',
    learned_at: timestamp('2025-03-01T00:00:00'),
    expired_at: NULL
});
CREATE (:EntityNode {
    id: 'eref:errors-cause-rework',
    label: 'Errors→Rework (causal link)',
    kind: 'edge-ref',
    layer: 'meta',
    learned_at: timestamp('2025-03-01T00:00:00'),
    expired_at: NULL
});

-- Now connect the two causal links with a meta-causal LeadsToEdge
CREATE (:LeadsToEdge {
    id: 'le:meta-stress-chain',
    layer: 'meta',
    kind: 'causal',
    context: 'team-health-analysis',
    learned_at: timestamp('2025-03-01T00:00:00'),
    expired_at: NULL
});
MATCH (a:EntityNode {id:'eref:stress-causes-errors'}), (l:LeadsToEdge {id:'le:meta-stress-chain'})
CREATE (a)-[:CONNECTS]->(l);
MATCH (l:LeadsToEdge {id:'le:meta-stress-chain'}), (b:EntityNode {id:'eref:errors-cause-rework'})
CREATE (l)-[:BINDS]->(b);
```

We now have a causal link between causal links — meta-causality. The graph has metagraphic depth without requiring a metagraph database. The layering handles it: layer: 'instance' for ground-level facts, layer: 'domain' for semantic groupings, layer: 'meta' for relations about relations.
Causality Relations and Temporal Chains
The LeadsToEdge is the backbone of dynamic memory. It encodes not just "A happened before B" but "A caused B in this context, with this kind of mechanism."
```cypher
-- Ground-level causal facts
CREATE (:EntityNode { id:'e:deadline-pressure', label:'Deadline Pressure', kind:'event', layer:'domain',   learned_at:timestamp('2025-02-01T00:00:00'), expired_at:NULL });
CREATE (:EntityNode { id:'e:late-nights',       label:'Late Nights',       kind:'event', layer:'instance', learned_at:timestamp('2025-02-01T00:00:00'), expired_at:NULL });
CREATE (:EntityNode { id:'e:burnout',           label:'Burnout',           kind:'event', layer:'domain',   learned_at:timestamp('2025-02-01T00:00:00'), expired_at:NULL });

-- Pressure leads to late nights
CREATE (:LeadsToEdge {
    id: 'le:pressure-nights',
    layer: 'instance',
    kind: 'causal',
    context: 'project:alpha',
    learned_at: timestamp('2025-02-10T00:00:00'),
    expired_at: NULL
});
MATCH (a:EntityNode {id:'e:deadline-pressure'}), (l:LeadsToEdge {id:'le:pressure-nights'}) CREATE (a)-[:CONNECTS]->(l);
MATCH (l:LeadsToEdge {id:'le:pressure-nights'}), (b:EntityNode {id:'e:late-nights'})       CREATE (l)-[:BINDS]->(b);

-- Late nights lead to burnout
CREATE (:LeadsToEdge {
    id: 'le:nights-burnout',
    layer: 'instance',
    kind: 'causal',
    context: 'project:alpha',
    learned_at: timestamp('2025-02-20T00:00:00'),
    expired_at: NULL
});
MATCH (a:EntityNode {id:'e:late-nights'}), (l:LeadsToEdge {id:'le:nights-burnout'}) CREATE (a)-[:CONNECTS]->(l);
MATCH (l:LeadsToEdge {id:'le:nights-burnout'}), (b:EntityNode {id:'e:burnout'})     CREATE (l)-[:BINDS]->(b);

-- Multi-hop causal chain query
MATCH (root:EntityNode)-[:CONNECTS]->(l1:LeadsToEdge)-[:BINDS]->
      (mid:EntityNode)-[:CONNECTS]->(l2:LeadsToEdge)-[:BINDS]->(leaf:EntityNode)
WHERE l1.context = 'project:alpha'
  AND l1.expired_at IS NULL
  AND l2.expired_at IS NULL
RETURN root.label, l1.kind, mid.label, l2.kind, leaf.label;
```

The result gives you the full causal narrative: Deadline Pressure → (causal) → Late Nights → (causal) → Burnout, all within the project:alpha context.
Graph Clustering with Layer and Kind
The layer and kind properties on every node are not decorative metadata. They are the clustering dimensions of the graph.
Layer organizes depth of abstraction:
| Layer | Meaning | Example |
| --- | --- | --- |
| core | Universal primitives | "time", "space", "causation" |
| domain | Domain-specific concepts | "sprint", "trust_score", "memory_trace" |
| instance | Concrete occurrences | "the meeting on Jan 15", "Alice" |
| meta | Relations about relations | References to edge-nodes, causal chains of causal chains |
Kind organizes semantic subtype within a relation. A ContainsEdge with kind: 'spatial' means physical containment. With kind: 'categorical' it means taxonomic membership. With kind: 'temporal' it means an event encompasses a time window.
Together, layer and kind let you run scoped traversals:
```cypher
-- Cluster 1: All domain-level similarity relations above 0.8
MATCH (src:EntityNode)-[:CONNECTS]->(e:SimilarEdge)-[:BINDS]->(tgt:EntityNode)
WHERE e.layer = 'domain' AND e.similarity > 0.8 AND e.expired_at IS NULL
RETURN src.label, e.context, tgt.label, e.similarity
ORDER BY e.similarity DESC;

-- Cluster 2: All causal chains at instance layer within a specific context
MATCH (src:EntityNode)-[:CONNECTS]->(l:LeadsToEdge)-[:BINDS]->(tgt:EntityNode)
WHERE l.layer = 'instance' AND l.kind = 'causal'
  AND l.context STARTS WITH 'project:' AND l.expired_at IS NULL
RETURN src.label, l.context, tgt.label;

-- Cluster 3: The meta-layer — what do we know about our own knowledge?
MATCH (src:EntityNode {layer:'meta'})-[:CONNECTS]->(edge)-[:BINDS]->(tgt:EntityNode)
WHERE edge.expired_at IS NULL
RETURN src.label, labels(edge)[0] AS edge_type, tgt.label;
```

The layer and kind axes let an AI agent ask not just “what do I know?” but “what do I know at what level of abstraction, and how was that knowledge derived?” This is the prerequisite for genuine epistemic humility in an agent — knowing the confidence and provenance of its own beliefs.
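Outside the database, the same clustering is just a grouping key. The Python sketch below is illustrative (the edge records are invented sample data): grouping edges by the (layer, kind) pair reproduces what the WHERE clauses above do declaratively.

```python
# Hypothetical sketch: clustering an edge-node list by its (layer, kind) axes.
# In the database this is a WHERE clause; here it is a dict grouping.
from collections import defaultdict

edges = [
    {"id": "se:1", "type": "SimilarEdge", "layer": "domain",   "kind": "semantic"},
    {"id": "le:1", "type": "LeadsToEdge", "layer": "instance", "kind": "causal"},
    {"id": "le:2", "type": "LeadsToEdge", "layer": "instance", "kind": "causal"},
    {"id": "le:3", "type": "LeadsToEdge", "layer": "meta",     "kind": "causal"},
]

clusters = defaultdict(list)
for e in edges:
    clusters[(e["layer"], e["kind"])].append(e["id"])

print(clusters[("instance", "causal")])  # ['le:1', 'le:2']
print(clusters[("meta", "causal")])      # ['le:3']
```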
Temporality and Dynamic Memory
The learned_at and expired_at fields on every node transform the graph from a static knowledge base into a dynamic memory that evolves through time.
Why Every Node Gets a Timestamp
Temporal awareness is not just for facts that change — it is for the knowledge of those facts that changes. An agent may believe similarity(A,B) = 0.7 based on a 2023 embedding model, and similarity(A,B) = 0.92 based on a 2025 model. Both beliefs are true within their validity windows. The graph must hold both.
By putting learned_at and expired_at on the edge-nodes themselves (not just on entity nodes), we track the lifespan of relationships, not just entities. This is the key distinction: in semantic spacetime, time is not a property of things — it is a property of the knowledge of relations between things.
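The validity-window rule described above reduces to a single predicate. Here is a minimal Python sketch of it, under the assumption (made explicit here, not stated as LadybugDB behavior) that a belief is active at time t when learned_at <= t and expired_at is either NULL or still in the future:

```python
# Hypothetical sketch of the validity-window rule: a relation is believed
# at time t iff it was learned by t and has not yet expired at t.
from datetime import datetime

def active_at(record, t):
    """True if the record's window [learned_at, expired_at) covers t."""
    if record["learned_at"] > t:
        return False
    return record["expired_at"] is None or record["expired_at"] > t

# Two versions of the same similarity belief, from two embedding models.
belief_v1 = {"learned_at": datetime(2023, 1, 1), "expired_at": datetime(2025, 1, 1)}
belief_v2 = {"learned_at": datetime(2025, 1, 1), "expired_at": None}

t = datetime(2024, 6, 1)
print(active_at(belief_v1, t), active_at(belief_v2, t))  # True False
```

Both beliefs coexist in the store; the predicate simply selects which one was held at the queried instant.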
Temporal Snapshot Queries
```cypher
-- What was the state of knowledge on a specific date?
WITH timestamp('2025-02-01T00:00:00') AS snapshot_time
MATCH (src:EntityNode)-[:CONNECTS]->(edge)-[:BINDS]->(tgt:EntityNode)
WHERE edge.learned_at <= snapshot_time
  AND (edge.expired_at IS NULL OR edge.expired_at > snapshot_time)
  AND src.learned_at <= snapshot_time
  AND (src.expired_at IS NULL OR src.expired_at > snapshot_time)
RETURN src.label, labels(edge)[0] AS relation, tgt.label;
```

Knowledge Decay and Soft Expiry
Rather than deleting outdated knowledge, we expire it. This preserves the history of how the agent’s beliefs evolved — essential for debugging, auditing, and the kind of temporal reasoning human memory naturally supports.
```cypher
-- A similarity relation becomes outdated when a new embedding model is deployed.
-- Rather than DELETE, we expire the old and create the new.

-- Expire the old
MATCH (e:SimilarEdge {id: 'se:km-v1'})
SET e.expired_at = timestamp('2025-06-01T00:00:00');

-- Create the updated belief
CREATE (:SimilarEdge {
    id: 'se:km-v2',
    layer: 'core',
    kind: 'semantic',
    context: 'epistemology',
    similarity: 0.94,
    learned_at: timestamp('2025-06-01T00:00:00'),
    expired_at: NULL
});
MATCH (a:EntityNode {id:'e:knowledge'}), (e:SimilarEdge {id:'se:km-v2'}) CREATE (a)-[:CONNECTS]->(e);
MATCH (e:SimilarEdge {id:'se:km-v2'}), (b:EntityNode {id:'e:memory'})    CREATE (e)-[:BINDS]->(b);
```

The old belief is not gone — it is timestamped. An agent replaying history will see the world as it appeared before June 2025. An agent operating in the present will see only unexpired edges. The two queries differ by a single WHERE clause.
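The expire-then-create pattern is mechanical enough to capture in a small helper. This Python sketch is hypothetical (the `revise` helper and record layout are invented for illustration, not part of any API): the old belief keeps its closed window, and the new one opens an unbounded window at the same instant.

```python
# Hypothetical sketch of the expire-then-create update pattern.
# Nothing is deleted: the old belief's window is closed, a new one is opened.
from datetime import datetime

store = {
    "se:km-v1": {"similarity": 0.70,
                 "learned_at": datetime(2023, 1, 1), "expired_at": None},
}

def revise(store, old_id, new_id, new_fields, now):
    """Soft-expire old_id and insert new_id as the current belief."""
    store[old_id]["expired_at"] = now
    store[new_id] = {**new_fields, "learned_at": now, "expired_at": None}

revise(store, "se:km-v1", "se:km-v2", {"similarity": 0.94}, datetime(2025, 6, 1))

# Only unexpired beliefs are "present-tense" knowledge.
current = [k for k, v in store.items() if v["expired_at"] is None]
print(current)  # ['se:km-v2']
```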
The Memory Horizon Query
```cypher
-- Active knowledge: what does the agent know right now?
MATCH (src:EntityNode)-[:CONNECTS]->(edge)-[:BINDS]->(tgt:EntityNode)
WHERE edge.expired_at IS NULL
  AND src.expired_at IS NULL
  AND tgt.expired_at IS NULL
RETURN src.label, labels(edge)[0] AS relation, edge.layer, edge.kind, tgt.label;

-- Recent acquisitions: what was learned in the last 30 days?
WITH timestamp('2025-05-01T00:00:00') AS thirty_days_ago
MATCH (src:EntityNode)-[:CONNECTS]->(edge)-[:BINDS]->(tgt:EntityNode)
WHERE edge.learned_at >= thirty_days_ago AND edge.expired_at IS NULL
RETURN src.label, labels(edge)[0], edge.learned_at, tgt.label
ORDER BY edge.learned_at DESC;
```

Putting It All Together: A Complete Memory Scene
Here is a small self-contained example that exercises every feature — hyperedges, meta-layer, causality, temporal expiry, and layer clustering — in one coherent scene.
```cypher
-- SCENE: An agent learns about a difficult project retrospective.
-- The agent believes the retrospective caused a team policy change,
-- and later refines its understanding of why.

-- Entities
CREATE (:EntityNode { id:'e:retro-jan',     label:'January Retrospective', kind:'event',   layer:'instance', learned_at:timestamp('2025-01-20T00:00:00'), expired_at:NULL });
CREATE (:EntityNode { id:'e:policy-change', label:'No-Meeting Fridays',    kind:'event',   layer:'instance', learned_at:timestamp('2025-01-25T00:00:00'), expired_at:NULL });
CREATE (:EntityNode { id:'e:team-health',   label:'Team Health',           kind:'concept', layer:'domain',   learned_at:timestamp('2025-01-01T00:00:00'), expired_at:NULL });
CREATE (:EntityNode { id:'e:alice',         label:'Alice',                 kind:'actor',   layer:'instance', learned_at:timestamp('2025-01-01T00:00:00'), expired_at:NULL });
CREATE (:EntityNode { id:'e:bob',           label:'Bob',                   kind:'actor',   layer:'instance', learned_at:timestamp('2025-01-01T00:00:00'), expired_at:NULL });

-- Hyperedge: the retro involved Alice, Bob, and team health as a topic
CREATE (:ContainsEdge {
    id:'ce:retro-members', layer:'instance', kind:'temporal',
    context:'team:engineering', probability:1.0,
    learned_at:timestamp('2025-01-20T00:00:00'), expired_at:NULL
});
MATCH (r:EntityNode {id:'e:retro-jan'}), (ce:ContainsEdge {id:'ce:retro-members'})    CREATE (r)-[:CONNECTS]->(ce);
MATCH (ce:ContainsEdge {id:'ce:retro-members'}), (a:EntityNode {id:'e:alice'})        CREATE (ce)-[:BINDS]->(a);
MATCH (ce:ContainsEdge {id:'ce:retro-members'}), (b:EntityNode {id:'e:bob'})          CREATE (ce)-[:BINDS]->(b);
MATCH (ce:ContainsEdge {id:'ce:retro-members'}), (th:EntityNode {id:'e:team-health'}) CREATE (ce)-[:BINDS]->(th);

-- Causality: the retro led to the policy change (initial belief)
CREATE (:LeadsToEdge {
    id:'le:retro-policy-v1', layer:'instance', kind:'causal',
    context:'team:engineering',
    learned_at:timestamp('2025-01-25T00:00:00'), expired_at:NULL
});
MATCH (r:EntityNode {id:'e:retro-jan'}), (l:LeadsToEdge {id:'le:retro-policy-v1'})     CREATE (r)-[:CONNECTS]->(l);
MATCH (l:LeadsToEdge {id:'le:retro-policy-v1'}), (p:EntityNode {id:'e:policy-change'}) CREATE (l)-[:BINDS]->(p);

-- Property: policy-change has property 'initiated-by'
CREATE (:HasPropertyEdge {
    id:'pe:policy-initiator', layer:'instance', kind:'relational',
    property_name:'initiated-by',
    learned_at:timestamp('2025-01-25T00:00:00'), expired_at:NULL
});
MATCH (p:EntityNode {id:'e:policy-change'}), (hp:HasPropertyEdge {id:'pe:policy-initiator'}) CREATE (p)-[:CONNECTS]->(hp);
MATCH (hp:HasPropertyEdge {id:'pe:policy-initiator'}), (a:EntityNode {id:'e:alice'})         CREATE (hp)-[:BINDS]->(a);

-- Similarity: the retro and team health are semantically close in this context
CREATE (:SimilarEdge {
    id:'se:retro-health', layer:'domain', kind:'semantic',
    context:'team-dynamics', similarity:0.87,
    learned_at:timestamp('2025-01-25T00:00:00'), expired_at:NULL
});
MATCH (r:EntityNode {id:'e:retro-jan'}), (se:SimilarEdge {id:'se:retro-health'})    CREATE (r)-[:CONNECTS]->(se);
MATCH (se:SimilarEdge {id:'se:retro-health'}), (th:EntityNode {id:'e:team-health'}) CREATE (se)-[:BINDS]->(th);

-- Query: full scene reconstruction
MATCH (src:EntityNode)-[:CONNECTS]->(edge)-[:BINDS]->(tgt:EntityNode)
WHERE edge.expired_at IS NULL
RETURN src.label, labels(edge)[0] AS relation, edge.layer, edge.kind, tgt.label
ORDER BY edge.learned_at;
```

What This Gives an AI Agent
When you put all of this together, an AI agent backed by this schema gains something qualitatively different from a vector store or a simple triple graph.
It can ask “what do I know about this event?” — traversing not just to entities but to the edge-nodes themselves, reading their layer, kind, context, and timestamps as first-class knowledge.
It can ask “how confident am I, and based on what?” — the probability on ContainsEdge and similarity on SimilarEdge are not external annotations but structural properties of the relation itself.
It can ask “what caused what, and has my understanding changed?” — the temporal expiry chain on LeadsToEdge nodes gives a revisable history of causal beliefs, not a single frozen fact.
And it can ask “what cluster of knowledge is this part of?” — the layer and kind taxonomy lets it reason at the right level of abstraction for a given query, rather than flooding every retrieval with irrelevant instance-level noise.
This is what the bipartite semantic spacetime graph enables: not just retrieval but understanding — contextualized, layered, temporal, and causally structured. The mathematics for this has existed for decades. With LadybugDB and Cypher, so does the implementation.
If this resonates with your work on AI memory systems, the full mathematical treatment — including hypergraph encoding in SQLite when you have no other options — is covered in my Agentic Memory book series. The formal type-theoretic foundations are in Pocket Knowledge Graphs. Both are available on Leanpub.