The Problem Starts with Triples
My graph journey began with AI agent memory. Like most people in this space, I started with what seemed like the obvious choice: directed graphs and triples — the classic subject-predicate-object model borrowed from RDF and the semantic web tradition. It’s clean, it’s mathematically well-understood, and it has decades of tooling behind it.
And then it failed us.
The triple is a beautiful abstraction, but it has a fundamental limitation: it can only express binary relationships. Every connection in the world is flattened into a pair. When you try to model the richness of human cognition — events involving multiple participants, contextual clusters of memory, hierarchical structures of meaning — the triple starts to break under the weight. You either lose information, or you end up with an explosion of artificial intermediate nodes that obscure rather than illuminate the underlying structure.
So I went deeper.
How Human Memory Actually Works
Before we talk about graph mathematics, it’s worth understanding what we’re actually trying to model. Human memory is not a flat key-value store. It is not even a simple network of associations.
Research in cognitive science and neuroscience reveals that human memory is fundamentally hierarchical, multi-layered, and metagraphic in nature.
At the lowest level, individual memories bind together sensory details — sounds, faces, smells, emotions — into episodic traces. But those traces don’t live in isolation. They belong to semantic packages: your memories of a particular person, a particular project, a particular period of your life. These packages cluster into higher-order structures — life chapters, identities, conceptual domains.
What this means structurally is that memory has:
Hierarchy. Memories nest inside categories, which nest inside broader schemas. A specific conversation lives inside a relationship, which lives inside a phase of your life.
Shared substructures. The same person can appear in dozens of different episodic memories. The same concept can be part of multiple semantic packages simultaneously. Memory is not a tree — it’s a graph where subgraphs overlap and recombine.
Edges as first-class citizens. The relationship between two memories is itself something you can remember, reflect on, and connect to other things. The fact that event A caused event B is a piece of knowledge — and that causal link might itself participate in higher-level reasoning.
Contextual grouping. A memory of a dinner involves multiple people, a location, a time, a mood, and various objects — all connected simultaneously, not pairwise. No sequence of binary edges faithfully captures this.
This is precisely what makes the triple model insufficient, and what drives us toward hypergraphs and metagraphs.
Hypergraphs: When One Edge Isn’t Enough
A standard graph edge connects exactly two nodes. A hyperedge connects any number of nodes simultaneously. This is the key innovation of the hypergraph.
Consider a memory of a team meeting. The participants were Alice, Bob, and Carol. The topic was the product launch. The feeling was tension. In a standard graph, you’d need to create an artificial “meeting” node and connect everyone to it with separate edges, losing the native grouping. In a hypergraph, you create a single hyperedge that directly spans all participants and attributes at once.
This makes hypergraphs particularly powerful for modeling events, relationships, and any phenomenon that is inherently multi-participant. Direction can still be introduced — a hyperedge can have source nodes and target nodes — making it possible to model directed multi-party interactions as well.
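As a concrete sketch, the meeting example can be held in a single directed hyperedge. The `Hyperedge` class and every name below are illustrative, not a fixed API:

```python
from dataclasses import dataclass

# A minimal directed hyperedge: one connection that spans any number of
# nodes at once, split into source and target sides.
@dataclass(frozen=True)
class Hyperedge:
    label: str
    sources: frozenset  # the participants
    targets: frozenset  # what the event involves or produces

# The team meeting from the text: one edge, all members at once,
# with no artificial intermediate "meeting" node.
meeting = Hyperedge(
    label="meeting",
    sources=frozenset({"Alice", "Bob", "Carol"}),
    targets=frozenset({"product launch", "tension"}),
)

# Membership in the event is a direct lookup, not a multi-hop path query.
assert "Alice" in meeting.sources
assert "tension" in meeting.targets
```

The grouping is native to the edge itself, which is exactly what the pairwise encoding loses.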
The mathematics of hypergraphs is well-developed. There are algorithms for clustering, traversal, and analysis that exploit the hyperedge structure. TypeDB, for example, expresses hypergraphic structures through dependent types and a purpose-built modeling language, giving us an effective way to describe and query hyperedges in practice.
But hypergraphs have their own ceiling, and we hit it quickly.
The Wall: You Can’t Reference a Hyperedge
Here’s the problem. Once you have a hyperedge connecting multiple people in a shared event — a great first step — you immediately want to do the next natural thing: reference that hyperedge. You want to say “this event caused that event,” or “this event is part of this project,” or “I remember this event with a certain emotional quality.”
In the mathematical definition of a hypergraph, you cannot do this. A hyperedge is a connection — it is not a node. You cannot make a hyperedge the subject or object of another relationship without stepping outside the formal definition of the hypergraph. And the moment you do that, all the algorithms and mathematical properties you were relying on break down.
This is not a tooling problem. It’s a structural one. The hypergraph simply doesn’t have the expressiveness we need.
Metagraphs: Graphs All the Way Down
The solution is the metagraph — a more powerful generalization where not only can edges connect multiple nodes, but the very concepts of node and edge become interchangeable and recursive.
In a metagraph:
Metanodes can contain entire subgraphs inside them. A node is not just an atom — it can be a whole world of internal structure.
Metaedges can connect other edges, not just nodes. Relationships can have relationships.
Edges can be used as nodes in other parts of the graph, enabling the kind of self-reference that human memory clearly requires.
This is the right model for cognition. The memory of a specific conversation (a subgraph of entities and relations) becomes a node in a higher-level graph of your relationship with that person, which is itself a node in a life-chapter graph. The causal link between two events can itself be connected to the causal link between two other events — meta-causality.
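A minimal sketch of this recursion, under the assumption that an edge is simply an addressable object that may itself appear as an endpoint (the `Edge` class and the labels are illustrative):

```python
# Metagraph sketch: because an edge is an object in its own right, it can
# serve as the endpoint of another edge. That is the self-reference a plain
# hypergraph forbids.
class Edge:
    def __init__(self, label, endpoints):
        self.label = label
        self.endpoints = tuple(endpoints)  # atoms or other Edge objects

# Two ordinary causal links between events...
cause1 = Edge("causes", ["storm", "power outage"])
cause2 = Edge("causes", ["power outage", "data loss"])

# ...and a meta-edge connecting the two causal links themselves:
# meta-causality, one level up.
meta = Edge("chains-into", [cause1, cause2])

assert meta.endpoints[0] is cause1  # an edge used as a node
```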
The expressive power is remarkable. Unfortunately, the practical reality is harsh.
The Metagraph Problem: We Have No Tools
Let me be direct: metagraphs are extremely hard to model, practically impossible to visualize, and — as of today — no mainstream database is capable of working with them natively.
We have partial solutions. TerminusDB with JSON-LD gets us into metagraph-adjacent territory with some constraints.
TypeDB’s type-theoretic approach handles hypergraph-like structures with some elegance.
And in my book on graph modeling in SQL and SQLite, I describe how to encode hypergraph and metagraph structures in relational databases — which works, but forces you to translate Cypher into SQL, a translation that can become nightmarishly complex.
We need metagraphs for proper memory modeling. We don’t have the databases. And building a metagraph database from scratch — while it would be a magnificent nerdy endeavor — is not a realistic path for most projects.
So what do we do?
Bipartite Graphs: The Elegant Compromise
The answer, it turns out, has been hiding in plain sight for decades: bipartite graphs.
A bipartite graph is a graph whose nodes can be divided into exactly two groups — let’s call them N-nodes (regular nodes) and E-nodes (edge-nodes) — such that every connection goes between the groups, never within them. An N-node connects only to E-nodes, and an E-node connects only to N-nodes. You never have N-to-N or E-to-E connections.
This might remind you of something. It’s structurally very similar to the triple: you always have the pattern N → E → N. But here’s the key insight: the E-nodes are nodes. They are first-class citizens in the graph, with all the properties and queryability that implies.
What does this give us? It gives us a clean, database-friendly way to represent hyperedges. Instead of a hyperedge connecting five nodes simultaneously, we create one E-node (representing the relationship) connected to all five N-nodes. The hyperedge has become a node — and now we can reference it, attach properties to it, connect it to other things.
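A small sketch of that conversion, with illustrative names: one hyperedge becomes one E-node plus ordinary binary incidence edges, and every connection crosses the N/E partition.

```python
# Bipartite encoding sketch: a hyperedge over k members becomes one E-node
# and k plain binary edges. Names here are illustrative.
def reify(edge_id, members):
    """Return (nodes, edges) for the bipartite form of one hyperedge."""
    e_node = {"id": edge_id, "kind": "E"}
    n_nodes = [{"id": m, "kind": "N"} for m in members]
    incidences = [(edge_id, m) for m in members]  # always E-to-N
    return [e_node] + n_nodes, incidences

nodes, edges = reify(
    "meeting-1", ["Alice", "Bob", "Carol", "product launch", "tension"]
)

# The defining bipartite property: no edge stays inside one group.
kinds = {n["id"]: n["kind"] for n in nodes}
assert all(kinds[a] == "E" and kinds[b] == "N" for a, b in edges)
```

The E-node `meeting-1` is now an ordinary node: it can carry properties and be connected onward, which is the whole point.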
Figure: a hypergraph
The RDF community knows this technique as reification — lifting a triple into a node so it can be talked about. The bipartite formulation makes this the native structure rather than an awkward afterthought.
Figure: bipartite translation of a hypergraph
In a bipartite graph, we have two kinds of nodes and only one kind of relationship. We don’t even need typed edges because the structure encodes itself: the type of any node is deducible from what it connects to, and the arity of an E-node tells you whether it’s a simple binary edge or a complex hyperedge.
Multipartite Graphs: Extending the Model
The bipartite structure solves the hyperedge problem, but we still need one more step to model metagraphs: we need to be able to reference the bipartite hyperedges themselves as nodes in a higher-level structure.
This is where multipartite graphs come in. A multipartite graph generalizes the bipartite idea to multiple groups of nodes — Group 1, Group 2, Group 3, and so on. The rule is that nodes within the same group cannot connect directly; connections always cross group boundaries.
In our framework, this looks like:
Layer 1 (N-nodes): The primitive entities — people, concepts, objects, moments.
Layer 2 (E-nodes): Hyperedges from Layer 1, now promoted to nodes — events, relationships, actions.
Layer 3 (S-nodes): Subgraph-nodes that group and reference Layer 2 elements — contexts, narratives, meta-relations.
And nothing stops us from going further. The metagraph’s recursive depth becomes expressible as additional layers in the multipartite structure.
This is the tripartite extension: the third node type, the S-node (for “subgraph”), connects hyperedge-nodes and regular nodes together into meta-structures. Once meta-edges and meta-nodes exist at this level, they can in turn be connected to assemble nodes that represent entire meta-subgraphs.
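The three-layer structure above can be sketched as plain data; the node names and the dict layout are illustrative assumptions:

```python
# Tripartite sketch: N-nodes (entities), E-nodes (events, i.e. reified
# hyperedges), and S-nodes (contexts grouping events). All names invented.
graph = {
    "nodes": {
        "Alice": "N", "Bob": "N",
        "kickoff": "E", "retro": "E",   # Layer 2: events
        "project-x": "S",               # Layer 3: a context spanning events
    },
    "edges": [
        ("kickoff", "Alice"), ("kickoff", "Bob"),          # E -> N
        ("retro", "Alice"),
        ("project-x", "kickoff"), ("project-x", "retro"),  # S -> E
    ],
}

# The multipartite rule holds: no edge connects two nodes of the same group.
kind = graph["nodes"]
assert all(kind[a] != kind[b] for a, b in graph["edges"])
```

The S-node `project-x` references the events as ordinary nodes, which is exactly the metagraph behavior the plain hypergraph could not express.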
Layered Graphs: The Practical Implementation
Let me tie this together with the concept of layered graphs, which is what we actually implement in practice.
The idea is straightforward: each layer of the graph can reference the edges of the layer below as its own nodes. Layer 1 is your standard property graph of entities and relationships. Layer 2 takes those relationships and treats them as nodes, creating meta-relationships between relationships. Layer 3 does the same to Layer 2. And so on.
The beautiful practical consequence: all of this can be queried with standard Cypher against any property graph database. Ladybug, Neo4j, Kuzu — these tools all support property graphs with typed nodes and edges. By encoding layer membership as node and edge properties, we can traverse the full metagraphic structure using queries the database already understands.
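As an illustration, a cross-layer traversal could take a shape like the following Cypher, shown here as a Python string. The `role` and `layer` properties follow the encoding described in this section; names like `project-x` are invented for the example, and real labels would depend on your schema.

```python
# A Cypher query of the shape the layered encoding allows: from an S-node
# context, through its E-node events, down to the N-node participants.
# Two ordinary relationship hops, which any property graph engine can run.
query = """
MATCH (s {role: 'S', layer: 3})-->(e {role: 'E', layer: 2})-->(n {role: 'N', layer: 1})
WHERE s.name = 'project-x'
RETURN e.name, n.name
"""

# Nothing in the query is metagraph-specific; the layering lives entirely
# in node properties.
assert "role: 'S'" in query and "role: 'N'" in query
```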
We don’t need a metagraph database. We build metagraph behavior into a layered, multipartite graph sitting on top of a Cypher-capable engine.
The key encoding rules are:
Each node has a layer property and a role property (N, E, or S type).
Edges within the bipartite/multipartite structure follow the strict cross-group rule.
Higher-layer S-nodes reference lower-layer E-nodes by connecting to their node representations.
Properties on edge-nodes carry everything that would traditionally be an edge property, plus metadata about the original relationship structure.
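Those rules can be sketched directly as property-graph records. The helper names and field values below are illustrative, not a prescribed schema:

```python
# Encoding-rule sketch: every node carries `layer` and `role`, edge-nodes
# carry what would traditionally be edge properties, and links must cross
# role groups. All concrete names are invented for the example.
def make_node(node_id, layer, role, **props):
    assert role in ("N", "E", "S")
    return {"id": node_id, "layer": layer, "role": role, **props}

def connect(a, b):
    # Enforce the strict cross-group rule of the multipartite structure.
    assert a["role"] != b["role"], "edges must cross group boundaries"
    return (a["id"], b["id"])

alice = make_node("alice", 1, "N", kind="person")
dinner = make_node("dinner-1", 2, "E", mood="warm")  # edge-node with properties
chapter = make_node("berlin-years", 3, "S")

links = [connect(dinner, alice), connect(chapter, dinner)]
assert links == [("dinner-1", "alice"), ("berlin-years", "dinner-1")]
```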
What We Gain
With this framework — bipartite and multipartite layered graphs on a standard property graph database — we gain something remarkable for AI memory modeling.
We can represent episodic memories as hyperedge-nodes: events that bind multiple participants, objects, and contexts simultaneously. We can reference those events as first-class nodes in semantic memory structures. We can model the causal links between events and then connect those causal links to other causal links, building the kind of meta-causal reasoning that human reflection involves. We can cluster subgraphs into higher-level nodes that represent life chapters, projects, or conceptual domains.
All of it queryable. All of it compatible with existing embedded graph databases, including Ladybug, a fork of the excellent Kuzu project. All of it expressible in Cypher without any translation to SQL.
The triple was a starting point. The hypergraph was a necessary evolution. The metagraph is the destination. And the bipartite-multipartite layered graph is the road that gets us there with tools we actually have today.
What’s Next
This framework opens several directions I’m actively exploring. How do we efficiently encode temporal context across layers — so that the same entity can have different properties in different time-indexed subgraphs? How do we handle the intersection of formal type systems (dependent types, Agda-style specifications) with the metagraphic structures described here? How do edge-device constraints change the storage and query strategies for this kind of graph?
If these questions resonate with you, I’ve written about the formal mathematical side of this extensively, including how to model hypergraphs and metagraphs in SQLite when you have no other option. And the Ladybug graph database — which I believe deserves its own detailed article — is particularly interesting in the context of the layered approach.
The goal, ultimately, is an AI memory that doesn’t just retrieve by similarity but actually understands context, causality, and the hierarchical nesting of meaning. The mathematics for that has existed for decades. Now we just need to put it to work.
To learn more about Context Graphs and agentic memory, get my bundle.