So, you know I'm working on a book about memory, and in particular about the temporal context of memory.

Time-Aware AI Memory: Why Time Is Key in Memory
How do you make an AI agent or conversational agent understand time? How does time shape attention? Why is time important for the context engine? You will learn how to add time to knowledge graphs, how time and causality drive context, and how to make the knowledge graphs used for AI memory time-aware.
https://leanpub.com/time-aware-ai-memory

As a result of working on this book, I built my own memory ontology that is more LLM-friendly in general. It's a kind of bipartite graph, and I've now published this bipartite graph ontology as a property graph on GitHub, separately from the book.

agentic-memory/memory-ontology-ladybug-v3.md at main · Volland/agentic-memory
Contribute to Volland/agentic-memory development by creating an account on GitHub.
https://github.com/Volland/agentic-memory/blob/main/memory-ontology-ladybug-v3.md

So even if you don't want to buy the book, you can still see the memory structure in some detail: I've shared the chapter that describes it.

The Challenge of Triples and Context

One of the tricks for making graphs friendlier to the LLM is simply to use bigger labels that capture more context. There is a common problem here: when we take a statement and extract knowledge from it into a knowledge graph, limiting ourselves to subject-predicate-object triples creates a challenge.

For example, if I say that Volodya and Alex like to drink tea and play chess, that's one fact. But it already involves multiple entities: two people, two objects, two actions. If we translate it into a set of triples, we miss the broader context. We could say:

  • Volodya drinks tea

  • Alex drinks tea

  • Volodya plays chess

  • Alex plays chess

  • Volodya plays chess with Alex

  • Volodya drinks tea with Alex, and so on

So by translating the bigger statement into triples, we lose parts of the context. We then have to reconstruct it from context, which means we effectively end up with a subgraph of these triples. It's quite a challenging task: if we decompose the statement into triples and compose them back, we can get something different, lose context, or produce bigger chunks of text that confuse the LLM itself. Because an LLM is not so different from people in this respect: if something easily confuses people, it will confuse the LLM too.
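The decomposition above can be sketched in a few lines. The predicate names here are my own illustration, not a fixed extraction schema:

```python
# One statement, exploded into standalone triples. Each triple is true on
# its own, but none of them records that the tea and the chess belong to
# the same shared activity between the two people.
statement = "Volodya and Alex like to drink tea and play chess"

triples = [
    ("Volodya", "drinks", "tea"),
    ("Alex", "drinks", "tea"),
    ("Volodya", "plays", "chess"),
    ("Alex", "plays", "chess"),
    ("Volodya", "plays_chess_with", "Alex"),
    ("Volodya", "drinks_tea_with", "Alex"),
]

# Reconstructing the original statement means re-joining this subgraph,
# and nothing in the triples themselves guarantees we re-join it the same
# way the statement was originally meant.
for s, p, o in triples:
    print(s, p, o)
```

Notice that the joint predicates (`plays_chess_with`, `drinks_tea_with`) try to patch the lost context back in, but they multiply quickly and still don't say the two activities happen together.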

The Solution: Placeholder Labels and Subgraphs

So what can we do about this? We can step back from pure subject-predicate-object and instead create a label with placeholders for the entities, one that describes the whole subgraph of the complex fact or complex event.

So your label would hold a statement with:

  • A placeholder for Volodya

  • A placeholder for Alex

  • "Drink" and a placeholder for tea

  • "Play" and a placeholder for chess

Then you can say that actions are entities too, so "play" and "drink" also get connected, and nothing stops us from creating more meaningful relations inside. We can create a node that describes this particular fact, and then add the edges of the subgraph saying that this fact contains Volodya, this fact contains Alex, this fact contains tea and chess; that is what actually happens inside. As a bonus, we can also include "play" and "drink", and if we really want even richer context, build connections between tea and drink, and between chess and play, with their own relations.
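Here is a minimal sketch of that fact-node idea. The field names (`label`, `slots`) and the `contains` / `object_of` edge names are my illustration, not the exact schema from the ontology on GitHub:

```python
# One fact node carries a label with placeholders, capturing the whole
# statement instead of scattering it across triples.
fact = {
    "id": "fact:1",
    "label": "{p1} and {p2} drink {drink} and play {game}",
    "slots": {"p1": "Volodya", "p2": "Alex", "drink": "tea", "game": "chess"},
}

# Edges of the subgraph: the fact contains each entity, plus (as a bonus)
# the actions themselves, plus richer entity-to-action relations.
edges = [
    ("fact:1", "contains", "Volodya"),
    ("fact:1", "contains", "Alex"),
    ("fact:1", "contains", "tea"),
    ("fact:1", "contains", "chess"),
    ("fact:1", "contains", "drink"),
    ("fact:1", "contains", "play"),
    ("tea", "object_of", "drink"),
    ("chess", "object_of", "play"),
]

def render(fact):
    """Substitute the slots back into the label to get LLM-ready text."""
    return fact["label"].format(**fact["slots"])

print(render(fact))  # prints "Volodya and Alex drink tea and play chess"
```

The key design point: the fact node keeps the statement whole for the LLM, while the `contains` edges still let graph queries reach the individual entities.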

Conclusion

So the way to make these graphs more suitable for the LLM is simply to allow bigger labels that capture bigger, more meaningful concepts, and to work more with subgraphs of complex facts, which carry more meaning for the LLM than bare subject, predicate, and object.

That's why, in my memory ontology, we have subgraph nodes that describe facts and events. An event is also a form of fact: it describes the fact that something is happening. So the difference is more semantic.

What's even more important is that you can tell complex stories, because facts can contain other facts and form a kind of storytelling: one fact might clarify another, or one fact might extend another.
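A tiny sketch of that storytelling structure; the example facts and the `extends` / `clarifies` relation names are invented here purely for illustration:

```python
# Facts, each a subgraph node in its own right.
facts = {
    "fact:1": "Volodya and Alex drink tea and play chess",
    "fact:2": "Volodya and Alex meet regularly to do this",
    "fact:3": "Alex prefers green tea",
}

# Fact-to-fact edges form the story: one fact extends or clarifies another.
fact_edges = [
    ("fact:2", "extends", "fact:1"),
    ("fact:3", "clarifies", "fact:1"),
]

# Walking these edges from fact:1 gathers the whole story for the LLM.
story = [src for (src, rel, dst) in fact_edges if dst == "fact:1"]
print(story)  # prints ['fact:2', 'fact:3']
```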

And if you want to learn more, just read the book; I go into the details of the ontology there. I also focus deeply on the notion of time. Facts are not as deeply dependent on time as events; if you add a time dimension to a fact, it probably gets closer to an event. But facts in my memory also carry temporal dimensions. So I can say that:

  • I learned this fact on a particular date

  • This fact is valid until a particular date

  • This fact starts to be valid from a particular date, and so on

And I also keep the regular system timestamps like created_at and updated_at. But on top of those, we have a validity dimension and a discovery dimension: when the fact holds, and when we learned about it. That distinction is quite important for more complex temporal reasoning in a graph, which can also be useful for the LLM.
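The two temporal dimensions can be sketched like this. The property names (`valid_from`, `valid_until`, `learned_at`) are my assumption for illustration, not necessarily the ontology's exact names:

```python
from datetime import date

fact = {
    "id": "fact:1",
    "label": "Volodya and Alex drink tea and play chess",
    # Validity dimension: when the fact holds in the world.
    "valid_from": date(2024, 1, 1),
    "valid_until": date(2025, 1, 1),
    # Discovery dimension: when the memory learned about it.
    "learned_at": date(2024, 3, 15),
    # Regular system timestamps.
    "created_at": date(2024, 3, 15),
    "updated_at": date(2024, 3, 15),
}

def is_valid_on(fact, day):
    """Temporal reasoning over the validity dimension: did the fact hold on this day?"""
    return fact["valid_from"] <= day <= fact["valid_until"]

def was_known_on(fact, day):
    """Temporal reasoning over the discovery dimension: did the memory know it yet?"""
    return fact["learned_at"] <= day

print(is_valid_on(fact, date(2024, 6, 1)))   # prints True: inside the validity window
print(was_known_on(fact, date(2024, 2, 1)))  # prints False: not yet learned
```

Separating the two dimensions lets you answer both "was this true in June?" and "did the agent know it in February?", which are different questions.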