The cognitive architecture of AI agents demands a fundamental rethinking of how we represent knowledge. While the database community has spent decades optimizing properties away — hiding them inside nodes for compactness and query performance — neuroscience and practical agent systems tell a different story: properties aren’t just metadata to be tucked away. They’re primary perceptual primitives that deserve first-class citizenship in our knowledge graphs.
The Property Paradox in Labeled Property Graphs
The labeled property graph (LPG) model treats properties as convenient annotations — key-value pairs attached to nodes and edges. This design choice emerged from pragmatic engineering concerns: modeling properties as separate nodes clutters graph visualizations, complicates traversal queries, and increases storage overhead. By embedding properties directly into graph elements, we achieve cleaner visualizations and simpler Cypher queries.
But this optimization trades away something profound. When properties disappear into nodes, we lose the ability to reason about them as independent entities. We can’t easily ask: “Which objects share this particular configuration of features?” or “How do different property combinations predict behavior?” The graph becomes a collection of opaque labeled boxes rather than a compositional space of features and relationships.
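To make the contrast concrete, here is a minimal Python sketch (all identifiers invented, toy data) of the two conventions side by side: an LPG-style node with embedded key-value properties, and a reified form where each property value is a node reachable through HAS_PROPERTY edges. In the reified form, the “which objects share this configuration?” question becomes a simple set intersection.

```python
from collections import defaultdict

# LPG style: properties embedded in the node as key-value pairs.
lpg_node = {
    "id": "apple_1",
    "label": "Apple",
    "properties": {"color": "red", "shape": "round", "diameter_in": 3},
}

# Reified style: each property value is its own node, linked by
# HAS_PROPERTY edges (represented here as (entity, property) pairs).
has_property = {
    ("apple_1", "color:red"), ("apple_1", "shape:round"),
    ("tomato_1", "color:red"), ("tomato_1", "shape:round"),
    ("ball_1", "shape:round"), ("ball_1", "color:blue"),
}

def entities_with(required):
    """Which entities share this particular configuration of features?"""
    index = defaultdict(set)
    for entity, prop in has_property:
        index[prop].add(entity)
    result = None
    for prop in required:
        result = index[prop] if result is None else result & index[prop]
    return result or set()

print(entities_with({"color:red", "shape:round"}))  # {'apple_1', 'tomato_1'}
```

With properties embedded, answering the same question means scanning every node’s opaque property bag; with properties reified, it falls out of the graph structure itself.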
This matters especially for agentic memory systems, where the goal isn’t just storage and retrieval but reconstruction and reasoning. An AI agent doesn’t simply look up “apple” — it synthesizes understanding from sensory features, contextual cues, and task requirements.
How Brains Actually Recognize Objects
Neuroscience reveals that object recognition proceeds bottom-up from feature detection, not top-down from category templates. When you see an apple, your visual cortex doesn’t match against a stored “apple prototype.” Instead:
Early visual areas detect edges, colors, textures — raw property signals
Mid-level areas combine these into shape fragments and surface properties
Higher areas integrate features into object hypotheses constrained by context and task
The “object” emerges from property composition. You recognize it as an apple because this particular constellation of features — red/green surface, round shape, ~3-inch diameter, stem — matches the property configuration your experience associates with apples.
Critically, this process is compositional and context-dependent. The same visual features might be recognized as “toy apple” in a playroom, “decorative apple” in a still-life painting, or “rotten apple” if textural properties shift. The properties themselves remain constant; what changes is how they’re weighted and combined for the task at hand.
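A toy sketch of that reweighting, with invented feature names, hypotheses, and weights: the observed properties stay fixed while the context decides which channels dominate, and the winning hypothesis flips accordingly.

```python
# Observed features are constant; context reweights them (toy data).
observed = {"color": "red", "shape": "round", "texture": "plastic"}

# Candidate object hypotheses described by expected feature values.
hypotheses = {
    "apple":     {"color": "red", "shape": "round", "texture": "waxy"},
    "toy apple": {"color": "bright-red", "shape": "round", "texture": "plastic"},
}

# Context decides how much each feature channel counts.
context_weights = {
    "kitchen":  {"color": 0.5, "shape": 0.4, "texture": 0.1},
    "playroom": {"color": 0.2, "shape": 0.2, "texture": 0.6},
}

def recognize(observed, context):
    weights = context_weights[context]
    def score(expected):
        return sum(weight for feature, weight in weights.items()
                   if observed.get(feature) == expected.get(feature))
    return max(hypotheses, key=lambda name: score(hypotheses[name]))

print(recognize(observed, "kitchen"))   # apple (color and shape dominate)
print(recognize(observed, "playroom"))  # toy apple (texture dominates)
```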
Duck Typing for Knowledge Graphs
This feature-based recognition maps directly onto programming concepts we already understand. “If it quacks like a duck, swims like a duck, and walks like a duck, it’s probably a duck” isn’t just folk wisdom: it’s structural typing based on interface compliance.
In knowledge graphs, modeling properties as nodes enables:
Polymorphic reasoning: Query for “all entities with properties {liquid, transparent, drinkable}” rather than requiring explicit water/juice/tea categories
Interface-based clustering: Group entities by shared property configurations, discovering implicit types that weren’t explicitly modeled
Compositional queries: “Find entities similar to X but lacking property Y” — something nearly impossible when properties are hidden
Dynamic classification: As new properties are observed, entities can fluidly move between classifications without schema migration
This mirrors object-oriented programming’s emphasis on interfaces over implementation — but applied to knowledge representation. The property set defines the type, not a rigid class hierarchy.
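A minimal sketch of these queries over a property-explicit store (toy data, invented names): the required property set is the “interface,” and membership is computed by set containment rather than by consulting a class hierarchy.

```python
from collections import defaultdict

# Toy property-explicit store: entity -> set of property nodes.
entity_props = {
    "mallard":  {"quacks", "swims", "walks", "flies"},
    "duck_toy": {"quacks", "floats"},
    "goose":    {"honks", "swims", "walks", "flies"},
    "penguin":  {"swims", "walks"},
}

def with_properties(required, excluded=frozenset()):
    """Structural typing: the property set is the interface."""
    return {entity for entity, props in entity_props.items()
            if required <= props and not (excluded & props)}

# Polymorphic reasoning: no explicit Duck category needed.
print(with_properties({"quacks", "swims", "walks"}))             # {'mallard'}

# Compositional / contrastive query: swims and walks, but not flies.
print(with_properties({"swims", "walks"}, excluded={"flies"}))   # {'penguin'}

# Interface-based clustering: discover implicit types from shared configurations.
clusters = defaultdict(list)
for entity, props in entity_props.items():
    clusters[frozenset(props)].append(entity)
```

Dynamic classification falls out for free: adding a newly observed property to an entity’s set changes which queries it satisfies, with no schema migration.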
Semantic Spacetime and Property Relations
The semantic spacetime framework I’ve explored reduces arbitrary information to four fundamental relations. One of these is property assignment — not as a second-class annotation but as a primary relationship type alongside causation, composition, and transformation.
When properties become explicit nodes, they can (see the sketch after this list):
Form property networks: “temperature” relates to “thermal energy” relates to “molecular motion” — creating a conceptual substrate beneath object-level descriptions
Participate in meta-relations: Properties can have properties (uncertainty, measurement precision, temporal validity), enabling probabilistic and temporal reasoning
Support abstraction hierarchies: Specific properties (RGB #FF0000) can inherit from general properties (red), which in turn can inherit from abstract properties (color)
Enable analogical reasoning: Structural similarity between property configurations suggests functional similarity — the basis for case-based reasoning and transfer learning
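As a rough sketch (all identifiers invented), the first three capabilities fit in a few lines once properties are nodes: IS_A edges between property nodes give the abstraction hierarchy, and a meta-property table attaches properties to properties.

```python
# Property nodes related to other property nodes via IS_A edges (toy data).
is_a = {
    "rgb_FF0000": "red",
    "red": "color",
    "temperature": "physical_quantity",
}

# Meta-relations: properties of properties (precision, temporal validity).
meta = {
    "temperature": {"unit": "kelvin", "precision": 0.1, "valid_until": "2025-12-31"},
}

def abstractions(prop):
    """Walk the IS_A chain upward through the abstraction hierarchy."""
    chain = [prop]
    while prop in is_a:
        prop = is_a[prop]
        chain.append(prop)
    return chain

print(abstractions("rgb_FF0000"))        # ['rgb_FF0000', 'red', 'color']
print(meta["temperature"]["precision"])  # 0.1
```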
This isn’t just theoretical elegance. For agentic memory systems that need to reconstruct knowledge rather than merely retrieve it, property networks provide the compositional substrate for synthesis.
Richer Recall Through Feature-Based Retrieval
Current vector-based RAG systems embed entire documents or chunks as single points in semantic space. Retrieval becomes nearest-neighbor search: “Find chunks similar to the query embedding.”
But human recall doesn’t work this way. You don’t search for “documents about apples.” You search for:
“That crisp thing I ate yesterday” (texture + temporal properties)
“The red fruit on the table” (color + location properties)
“Something sweet to pair with cheese” (taste + complementarity properties)
The relevant features shift based on task context. A property-explicit knowledge graph enables:
Context-sensitive retrieval: Weight property matches differently for different queries — color matters more for visual recognition tasks, flavor for culinary tasks
Partial matching: Find entities sharing some but not all properties — critical for analogical reasoning
Contrastive queries: “Like X but not Y” — requiring explicit property representation to compute
Explanatory traces: “I retrieved this because properties P1, P2, P3 matched your query context”
This transforms retrieval from opaque similarity scoring to transparent feature-based reasoning — essential for agents that need to explain their decisions.
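A compact sketch of such a retriever (toy memory, invented weights): property matches are scored per context, a contrastive filter drops excluded values, and the set of matched properties doubles as the explanatory trace.

```python
# Toy memory: entity -> embedded property values.
memory = {
    "apple_1": {"color": "red", "taste": "sweet", "location": "table"},
    "lemon_1": {"color": "yellow", "taste": "sour", "location": "bowl"},
    "grape_1": {"color": "red", "taste": "sweet", "location": "fridge"},
}

def retrieve(query_props, weights, exclude=()):
    """Weighted partial matching with a contrastive filter and explanations."""
    results = []
    for entity, props in memory.items():
        if any(props.get(k) == v for k, v in dict(exclude).items()):
            continue  # contrastive filter: "like X but not Y"
        matched = {k for k, v in query_props.items() if props.get(k) == v}
        if matched:
            score = sum(weights.get(k, 1.0) for k in matched)
            results.append((entity, score, sorted(matched)))
    return sorted(results, key=lambda r: -r[1])

# Culinary context: taste outweighs color; skip anything in the fridge.
for entity, score, why in retrieve(
        {"color": "red", "taste": "sweet"},
        weights={"taste": 2.0, "color": 0.5},
        exclude={"location": "fridge"}):
    print(f"{entity} (score {score}): matched {why}")
# apple_1 (score 2.5): matched ['color', 'taste']
```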
The Path Forward: Property-First Architectures
The question isn’t whether to represent properties explicitly — it’s how to do so efficiently at scale. Some directions:
Hybrid storage: Critical properties as explicit nodes for reasoning; non-critical properties embedded for performance
Property indexing: Specialized indices for property-based queries (feature vector indices, property graph indices)
Lazy materialization: Properties stored compactly but expanded into graph form when reasoning requires it (sketched below)
Temporal properties: Representing how properties change over time — essential for causal reasoning and prediction
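As one possible shape for the lazy-materialization idea (a hypothetical class, not an existing library API): properties stay embedded as compact key-value pairs until a reasoning step asks for the graph view, at which point they are expanded into explicit HAS_PROPERTY triples and cached.

```python
class LazyEntity:
    """Hypothetical sketch: compact storage, graph form built on demand."""

    def __init__(self, entity_id, props):
        self.id = entity_id
        self._props = props        # compact embedded storage
        self._materialized = None  # graph form, built lazily and cached

    def as_graph(self):
        """Expand properties into (entity, HAS_PROPERTY, property-node) triples."""
        if self._materialized is None:
            self._materialized = [
                (self.id, "HAS_PROPERTY", f"{key}:{value}")
                for key, value in self._props.items()
            ]
        return self._materialized

apple = LazyEntity("apple_1", {"color": "red", "shape": "round"})
# Cheap storage until a reasoning step asks for the graph view:
print(apple.as_graph())
# [('apple_1', 'HAS_PROPERTY', 'color:red'), ('apple_1', 'HAS_PROPERTY', 'shape:round')]
```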
The cognitive science is clear: brains organize knowledge around features, not categories. The computational theory is clear: compositional representations generalize better than monolithic ones. The practical experience is accumulating: agents that reason about properties outperform those that hide them.
Properties aren’t just database optimization targets. They’re the perceptual atoms from which understanding is constructed. In agentic memory systems designed for reasoning rather than mere retrieval, properties deserve their place in the spotlight — not hidden in the shadows of nodes, but standing as first-class citizens in their own right.
The duck’s quack, waddle, and swim aren’t secondary attributes of “duckness.” They constitute what it means to be a duck. Our knowledge graphs should reflect this reality.