You built your agent memory in LadybugDB. Typed node tables, polymorphic relationships, bipartite edge nodes — the whole Semantic Spacetime apparatus running on LadybugDB's embedded engine. It works. It's fast. It fits on your laptop.

Then someone asks:

“Can we export this to a knowledge graph? We use SPARQL.”

This is not a betrayal of the property graph faith. It’s interoperability. And the good news is that LadybugDB’s bipartite architecture — where edges are first-class nodes mediated by polymorphic FROM_LINK / TO_LINK rel tables — maps to RDF more naturally than you'd expect. In fact, the bipartite pattern is reification. You've been writing triples all along.

This article walks through the translation, node by node, rel by rel, from a working LadybugDB Cypher schema to Turtle (RDF’s most readable serialization). Every code block is real. Every mapping decision is explained.

The Source Schema: Bipartite Semantic Spacetime

Let’s start with a minimal but complete LadybugDB schema. This is the core of what we’ll translate — a bipartite graph with four Semantic Spacetime edge types.

-- Entity nodes: the "nouns" of the knowledge graph
CREATE NODE TABLE EntityNode (
    id         STRING,
    label      STRING,
    kind       STRING,     -- "concept" | "actor" | "event" | "place"
    layer      STRING,     -- "core" | "domain" | "instance"
    learned_at TIMESTAMP,
    expired_at TIMESTAMP,  -- NULL = still valid
    PRIMARY KEY (id)
);
-- Edge node: SIMILAR (γ₁ — near/similar)
CREATE NODE TABLE SimilarEdge (
    id         STRING,
    layer      STRING,
    kind       STRING,     -- "semantic" | "structural" | "functional"
    context    STRING,     -- "embedding_v3" | "domain:medicine"
    similarity DOUBLE,     -- [0.0 … 1.0]
    learned_at TIMESTAMP,
    expired_at TIMESTAMP,
    PRIMARY KEY (id)
);

-- Edge node: CONTAINS (γ₂ — containment/spatial)
CREATE NODE TABLE ContainsEdge (
    id          STRING,
    layer       STRING,
    kind        STRING,    -- "part_of" | "member_of" | "scope"
    context     STRING,
    probability DOUBLE,
    learned_at  TIMESTAMP,
    expired_at  TIMESTAMP,
    PRIMARY KEY (id)
);

-- Edge node: HAS_PROPERTY (γ₃ — expresses property)
CREATE NODE TABLE HasPropertyEdge (
    id            STRING,
    layer         STRING,
    kind          STRING,
    property_name STRING,
    learned_at    TIMESTAMP,
    expired_at    TIMESTAMP,
    PRIMARY KEY (id)
);

-- Edge node: LEADS_TO (γ₄ — causality/succession)
CREATE NODE TABLE LeadsToEdge (
    id         STRING,
    layer      STRING,
    kind       STRING,    -- "causal" | "temporal" | "logical"
    context    STRING,
    learned_at TIMESTAMP,
    expired_at TIMESTAMP,
    PRIMARY KEY (id)
);

-- TWO polymorphic rel tables enforce the bipartite rule
CREATE REL TABLE FROM_LINK (
    FROM EntityNode
    TO   SimilarEdge | ContainsEdge | HasPropertyEdge | LeadsToEdge
);
CREATE REL TABLE TO_LINK (
    FROM SimilarEdge | ContainsEdge | HasPropertyEdge | LeadsToEdge
    TO   EntityNode
);

This is the schema we’ll translate. Two entity nodes connected through a reified edge node, mediated by exactly two rel tables. The bipartite constraint means: entities never touch entities; edge nodes never touch edge nodes.
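To make the constraint concrete, here is a minimal Python sketch of the invariant the two rel tables enforce. This is illustrative code, not part of LadybugDB — the table names come from the schema above, but the function is hypothetical:

```python
# Hypothetical validator: the invariant that FROM_LINK / TO_LINK
# enforce at the schema level, restated as a runtime check.
ENTITY_TABLES = {"EntityNode"}
EDGE_TABLES = {"SimilarEdge", "ContainsEdge", "HasPropertyEdge", "LeadsToEdge"}

def check_bipartite(links):
    """links: (rel_table, src_table, dst_table) tuples; returns violations."""
    violations = []
    for rel, src, dst in links:
        if rel == "FROM_LINK":
            # Sources must be entities, targets must be edge nodes.
            ok = src in ENTITY_TABLES and dst in EDGE_TABLES
        elif rel == "TO_LINK":
            # Mirror image: edge node on the left, entity on the right.
            ok = src in EDGE_TABLES and dst in ENTITY_TABLES
        else:
            ok = False
        if not ok:
            violations.append((rel, src, dst))
    return violations

print(check_bipartite([
    ("FROM_LINK", "EntityNode", "SimilarEdge"),  # legal hop
    ("FROM_LINK", "EntityNode", "EntityNode"),   # entity touching entity
]))
# → [('FROM_LINK', 'EntityNode', 'EntityNode')]
```

Anything the check flags is exactly what Kùzu's rel table declarations make unrepresentable in the first place.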

Step 0: Define Your RDF Namespace

Before writing any triples, establish the vocabulary. In RDF, every concept needs a URI. In LadybugDB, concepts are table names and column names. The mapping is direct.

@prefix ldb:   <http://ladybugdb.org/ontology#> .
@prefix sst:   <http://ladybugdb.org/sst#> .
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:   <http://www.w3.org/2002/07/owl#> .

Two custom prefixes:

  • ldb: — LadybugDB ontology terms (node types, properties, structural predicates)

  • sst: — Semantic Spacetime terms (the four edge types, layers, kinds)

Everything else maps to standard RDF/OWL vocabulary.

Step 1: Translate Node Tables to RDF Classes

Each CREATE NODE TABLE becomes an rdfs:Class (or owl:Class if you want reasoning). Each column becomes an rdf:Property with a domain and range.

ldb:EntityNode  a  owl:Class ;
    rdfs:label  "Entity Node" ;
    rdfs:comment "The nouns of the knowledge graph — concepts, actors, events, places." .

ldb:SimilarEdge  a  owl:Class ;
    rdfs:label  "Similar Edge Node" ;
    rdfs:comment "Reified γ₁ (near/similar) relation from Semantic Spacetime." .

ldb:ContainsEdge  a  owl:Class ;
    rdfs:label  "Contains Edge Node" ;
    rdfs:comment "Reified γ₂ (containment/spatial) relation." .

ldb:HasPropertyEdge  a  owl:Class ;
    rdfs:label  "Has Property Edge Node" ;
    rdfs:comment "Reified γ₃ (expresses property) relation." .

ldb:LeadsToEdge  a  owl:Class ;
    rdfs:label  "Leads To Edge Node" ;
    rdfs:comment "Reified γ₄ (causality/succession) relation." .

# --- Group edge node classes under a common superclass ---
sst:EdgeNode  a  owl:Class ;
    rdfs:label "SST Edge Node" ;
    rdfs:comment "Abstract superclass for all reified Semantic Spacetime relations." .

ldb:SimilarEdge     rdfs:subClassOf  sst:EdgeNode .
ldb:ContainsEdge    rdfs:subClassOf  sst:EdgeNode .
ldb:HasPropertyEdge rdfs:subClassOf  sst:EdgeNode .
ldb:LeadsToEdge     rdfs:subClassOf  sst:EdgeNode .

Notice something? The rdfs:subClassOf hierarchy encodes what LadybugDB expresses through table naming conventions. In the property graph, all four edge node tables share the same structural role — they sit between two FROM_LINK/TO_LINK hops. In RDF, we make that shared role explicit with sst:EdgeNode.

Step 2: Translate Column Definitions to RDF Properties

Each column in a node table becomes a property declaration. The PRIMARY KEY column maps to the node's URI itself — it doesn't need a separate property.

# --- Properties shared across all nodes ---
ldb:layer  a  owl:DatatypeProperty ;
    rdfs:domain  [ owl:unionOf (ldb:EntityNode sst:EdgeNode) ] ;
    rdfs:range   xsd:string ;
    rdfs:comment "Abstraction layer: core, domain, instance, meta." .

ldb:kind  a  owl:DatatypeProperty ;
    rdfs:domain  [ owl:unionOf (ldb:EntityNode sst:EdgeNode) ] ;
    rdfs:range   xsd:string ;
    rdfs:comment "Semantic subtype within a layer." .

ldb:learnedAt  a  owl:DatatypeProperty ;
    rdfs:range   xsd:dateTime ;
    rdfs:comment "Timestamp when this knowledge was acquired." .

ldb:expiredAt  a  owl:DatatypeProperty ;
    rdfs:range   xsd:dateTime ;
    rdfs:comment "Timestamp when this knowledge became invalid. Absent = still valid." .

# --- EntityNode-specific ---
ldb:label  a  owl:DatatypeProperty ;
    rdfs:domain  ldb:EntityNode ;
    rdfs:range   xsd:string .

# --- SimilarEdge-specific ---
sst:context  a  owl:DatatypeProperty ;
    rdfs:domain  sst:EdgeNode ;
    rdfs:range   xsd:string .

sst:similarity  a  owl:DatatypeProperty ;
    rdfs:domain  ldb:SimilarEdge ;
    rdfs:range   xsd:double .

# --- ContainsEdge-specific ---
sst:probability  a  owl:DatatypeProperty ;
    rdfs:domain  ldb:ContainsEdge ;
    rdfs:range   xsd:double .

# --- HasPropertyEdge-specific ---
sst:propertyName  a  owl:DatatypeProperty ;
    rdfs:domain  ldb:HasPropertyEdge ;
    rdfs:range   xsd:string .

The key decision: id (the primary key) becomes the URI suffix, not a separate property. In LadybugDB, id is the identity. In RDF, identity is the URI. So an entity with id = "concept_42" becomes <http://ladybugdb.org/data/concept_42>. No redundancy.
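One practical wrinkle: ids containing characters that are illegal in a URI (the schema's own context values, like "domain:medicine", are an example). A small hypothetical helper, sketched in Python, percent-encodes them; the exporter later in this article assumes ids are already URI-safe:

```python
from urllib.parse import quote

BASE = "http://ladybugdb.org/data/"  # the article's data namespace

def node_uri(node_id: str) -> str:
    """Map a LadybugDB primary key to a full URI, percent-encoding
    characters (':', '/', spaces) that would break the IRI."""
    return BASE + quote(node_id, safe="")

print(node_uri("concept_42"))       # → http://ladybugdb.org/data/concept_42
print(node_uri("domain:medicine"))  # → http://ladybugdb.org/data/domain%3Amedicine
```

Because the encoding is reversible, the mapping stays lossless: `unquote` on the URI suffix recovers the original id.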

Step 3: Translate Polymorphic Rel Tables to RDF Object Properties

This is where it gets interesting. The two polymorphic rel tables (FROM_LINK and TO_LINK) encode the bipartite structure. In RDF, they become object properties linking entity nodes to edge nodes.

# --- Structural predicates (the bipartite backbone) ---
ldb:fromLink  a  owl:ObjectProperty ;
    rdfs:domain  ldb:EntityNode ;
    rdfs:range   sst:EdgeNode ;
    rdfs:comment "Connects a source entity to a reified edge node (FROM_LINK)." .

ldb:toLink  a  owl:ObjectProperty ;
    rdfs:domain  sst:EdgeNode ;
    rdfs:range   ldb:EntityNode ;
    rdfs:comment "Connects a reified edge node to a target entity (TO_LINK)." .

Two properties. That’s it. The bipartite rule from LadybugDB — “entities connect only to edge nodes, edge nodes connect only to entities” — is now encoded in rdfs:domain and rdfs:range constraints.

Step 4: Translate Instance Data

Now let’s populate. Suppose you have two concepts connected by a similarity relation in LadybugDB:

-- Create entities
CREATE (a:EntityNode {
    id: 'tcp_ip',
    label: 'TCP/IP Protocol',
    kind: 'concept',
    layer: 'domain',
    learned_at: timestamp('2024-01-15')
});
CREATE (b:EntityNode {
    id: 'http_protocol',
    label: 'HTTP Protocol',
    kind: 'concept',
    layer: 'domain',
    learned_at: timestamp('2024-01-15')
});

-- Create the reified edge node
CREATE (s:SimilarEdge {
    id: 'sim_tcp_http_001',
    layer: 'domain',
    kind: 'structural',
    context: 'network_stack',
    similarity: 0.78,
    learned_at: timestamp('2024-01-15')
});

-- Wire it up through the bipartite backbone
MATCH (a:EntityNode {id: 'tcp_ip'}), (s:SimilarEdge {id: 'sim_tcp_http_001'})
CREATE (a)-[:FROM_LINK]->(s);
MATCH (s:SimilarEdge {id: 'sim_tcp_http_001'}), (b:EntityNode {id: 'http_protocol'})
CREATE (s)-[:TO_LINK]->(b);

Here’s the same data in Turtle:

@prefix ldb:  <http://ladybugdb.org/ontology#> .
@prefix sst:  <http://ladybugdb.org/sst#> .
@prefix data: <http://ladybugdb.org/data/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# --- Entity nodes ---
data:tcp_ip  a  ldb:EntityNode ;
    ldb:label     "TCP/IP Protocol" ;
    ldb:kind      "concept" ;
    ldb:layer     "domain" ;
    ldb:learnedAt "2024-01-15T00:00:00"^^xsd:dateTime .

data:http_protocol  a  ldb:EntityNode ;
    ldb:label     "HTTP Protocol" ;
    ldb:kind      "concept" ;
    ldb:layer     "domain" ;
    ldb:learnedAt "2024-01-15T00:00:00"^^xsd:dateTime .

# --- Reified edge node ---
data:sim_tcp_http_001  a  ldb:SimilarEdge ;
    ldb:layer       "domain" ;
    ldb:kind        "structural" ;
    sst:context     "network_stack" ;
    sst:similarity  "0.78"^^xsd:double ;
    ldb:learnedAt   "2024-01-15T00:00:00"^^xsd:dateTime .

# --- Bipartite wiring ---
data:tcp_ip            ldb:fromLink  data:sim_tcp_http_001 .
data:sim_tcp_http_001  ldb:toLink    data:http_protocol .

Read it aloud: “TCP/IP links-from a similarity edge node, which links-to HTTP.” The bipartite hop is visible. The edge node carries all the metadata — layer, kind, context, similarity score. No information is lost.

Step 5: The RDF-star Shortcut (and Why You Might Skip It)

RDF 1.2 introduces triple terms (the successor to RDF-star’s quoted triples), which let you annotate edges without full reification. The syntax uses << >> delimiters:

@prefix sst:  <http://ladybugdb.org/sst#> .
@prefix ldb:  <http://ladybugdb.org/ontology#> .
@prefix data: <http://ladybugdb.org/data/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# RDF 1.2 triple term — compact but lossy
<< data:tcp_ip  sst:similarTo  data:http_protocol >>
    sst:similarity  "0.78"^^xsd:double ;
    sst:context     "network_stack" ;
    ldb:layer       "domain" ;
    ldb:kind        "structural" ;
    ldb:learnedAt   "2024-01-15T00:00:00"^^xsd:dateTime .

This is prettier. But there are three reasons your bipartite schema should probably stick with explicit reification:

1. Identity. RDF-star triple terms don’t have their own URI. Your SimilarEdge node with id: 'sim_tcp_http_001' is a first-class citizen — you can link to it, query it, expire it independently. A triple term is anonymous. In agent memory architectures, addressable edges matter — you need to update a similarity score when embeddings change, without deleting and recreating the triple.

2. Multi-hop queries. In LadybugDB, you write MATCH (a)-[:FROM_LINK]->(e:SimilarEdge)-[:TO_LINK]->(b) WHERE e.similarity > 0.7 RETURN a, b, e.context. In SPARQL over explicit reification, this translates directly. With triple terms, filtering edge metadata requires nested patterns that are harder to compose.

3. Semantic Spacetime typing. Your edge nodes are typed — SimilarEdge, ContainsEdge, HasPropertyEdge, LeadsToEdge. In RDF-star, the type of the annotation target is the triple itself, not a domain-specific class. You lose the rdf:type signal that makes SPARQL queries like "find all containment relations in the meta layer" trivial.

The bipartite reification pattern and RDF-star serve different consumers. Use explicit reification as your canonical export; offer RDF-star as a convenience view for downstream tools that expect it.
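To see why addressable edges matter in practice, here is a sketch in Python (the function name is invented) that builds a SPARQL UPDATE re-scoring one named edge node. A triple term has no URI, so it can't be targeted this directly:

```python
def rescore_edge_update(edge_id: str, new_sim: float) -> str:
    """Build a SPARQL UPDATE that swaps the similarity score on one
    reified edge node, leaving the rest of the edge untouched.
    Targeting the edge by URI is only possible because reification
    gave it one."""
    return f"""
PREFIX sst:  <http://ladybugdb.org/sst#>
PREFIX data: <http://ladybugdb.org/data/>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

DELETE {{ data:{edge_id} sst:similarity ?old . }}
INSERT {{ data:{edge_id} sst:similarity "{new_sim}"^^xsd:double . }}
WHERE  {{ data:{edge_id} sst:similarity ?old . }}
""".strip()

print(rescore_edge_update("sim_tcp_http_001", 0.81))
```

Send the resulting string to any SPARQL 1.1 Update endpoint; the edge keeps its identity, its provenance timestamps, and every link pointing at it.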

Step 6: Translate SPARQL ↔ Cypher Queries

Now let’s make sure you can actually query the exported data. Here are the most common patterns, side by side.

Find all entities similar to a given concept

Cypher (LadybugDB):

MATCH (a:EntityNode {id: 'tcp_ip'})-[:FROM_LINK]->(e:SimilarEdge)-[:TO_LINK]->(b:EntityNode)
WHERE e.similarity > 0.7
RETURN b.label, e.similarity, e.context
ORDER BY e.similarity DESC;

SPARQL (over exported RDF):

PREFIX ldb:  <http://ladybugdb.org/ontology#>
PREFIX sst:  <http://ladybugdb.org/sst#>
PREFIX data: <http://ladybugdb.org/data/>

SELECT ?label ?sim ?ctx WHERE {
    data:tcp_ip  ldb:fromLink  ?edge .
    ?edge        a              ldb:SimilarEdge ;
                 ldb:toLink     ?target ;
                 sst:similarity ?sim ;
                 sst:context    ?ctx .
    ?target      ldb:label      ?label .
    FILTER (?sim > 0.7)
}
ORDER BY DESC(?sim)

The structure is almost identical. FROM_LINK → ldb:fromLink, TO_LINK → ldb:toLink. Filtering on edge node properties translates directly.

Find all containment paths (metagraph traversal)

Cypher:

MATCH path = (root:EntityNode {id: 'system_arch'})
    (-[:FROM_LINK]->(e:ContainsEdge)-[:TO_LINK]->)+ (leaf:EntityNode)
WHERE leaf.kind = 'component'
RETURN [n IN nodes(path) | n.label] AS hierarchy;

SPARQL (with property paths):

PREFIX ldb:  <http://ladybugdb.org/ontology#>
PREFIX data: <http://ladybugdb.org/data/>

SELECT ?leaf ?leafLabel WHERE {
    data:system_arch (ldb:fromLink/ldb:toLink)+ ?leaf .
    ?leaf  a         ldb:EntityNode ;
           ldb:kind  "component" ;
           ldb:label ?leafLabel .
}

Wait — this SPARQL property path skips the edge nodes. It jumps directly from entity to entity through fromLink/toLink composition. That's correct for reachability queries, but it loses the intermediate ContainsEdge metadata (layer, probability). For full traversal with edge data, you need unrolled fixed-length patterns or application-side traversal — SPARQL property paths cannot bind the intermediate nodes they pass through:

# Explicit two-hop pattern for accessing edge node properties
SELECT ?parent ?child ?prob ?layer WHERE {
    ?parent  ldb:fromLink    ?edge .
    ?edge    a               ldb:ContainsEdge ;
             ldb:toLink      ?child ;
             sst:probability ?prob ;
             ldb:layer       ?layer .
}

This is the fundamental trade-off: property paths give you transitive closure; explicit patterns give you edge metadata. LadybugDB’s Cypher gives you both in a single MATCH because Kùzu's variable-length paths can filter on intermediate nodes. SPARQL makes you choose.
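When you need both closure and metadata, one option is to move the traversal into application code. A minimal sketch — hypothetical function and toy data, not a library API — that BFS-walks the fromLink/toLink hop while keeping the edge-node properties a property path would discard:

```python
from collections import deque

def contains_paths(from_links, to_links, edge_props, root):
    """BFS over the bipartite hop (fromLink then toLink), keeping the
    edge-node metadata that a SPARQL property path discards.

    from_links: {entity_id: [edge_id, ...]}
    to_links:   {edge_id: entity_id}
    edge_props: {edge_id: dict of edge-node properties}
    Returns (reached_entity, list_of_edge_dicts_along_path) pairs."""
    results, queue, seen = [], deque([(root, [])]), {root}
    while queue:
        node, path = queue.popleft()
        for edge in from_links.get(node, []):
            child = to_links[edge]
            new_path = path + [edge_props[edge]]
            results.append((child, new_path))
            if child not in seen:
                seen.add(child)
                queue.append((child, new_path))
    return results

# Toy hierarchy: system -> service -> component
from_links = {"system": ["c1"], "service": ["c2"]}
to_links = {"c1": "service", "c2": "component"}
edge_props = {"c1": {"probability": 0.9}, "c2": {"probability": 0.7}}
for child, path in contains_paths(from_links, to_links, edge_props, "system"):
    print(child, [p["probability"] for p in path])
# → service [0.9]
# → component [0.9, 0.7]
```

The dictionaries stand in for the results of the explicit two-hop SPARQL pattern above; in practice you would populate them from one flat query, then traverse locally.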

Temporal queries: find knowledge valid at a point in time

Cypher:

MATCH (a:EntityNode)-[:FROM_LINK]->(e)-[:TO_LINK]->(b:EntityNode)
WHERE e.learned_at <= timestamp('2024-06-01')
  AND (e.expired_at IS NULL OR e.expired_at > timestamp('2024-06-01'))
RETURN a.label, labels(e)[0] AS relation_type, b.label;

SPARQL:

PREFIX ldb: <http://ladybugdb.org/ontology#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>

SELECT ?aLabel ?edgeType ?bLabel WHERE {
    ?a     ldb:fromLink  ?edge .
    ?edge  ldb:toLink    ?b ;
           rdf:type      ?edgeType ;
           ldb:learnedAt ?learned .
    ?a     ldb:label     ?aLabel .
    ?b     ldb:label     ?bLabel .
    FILTER (?edgeType != owl:NamedIndividual)
    FILTER (?learned <= "2024-06-01T00:00:00"^^xsd:dateTime)
    FILTER NOT EXISTS {
        ?edge ldb:expiredAt ?expired .
        FILTER (?expired <= "2024-06-01T00:00:00"^^xsd:dateTime)
    }
}

The FILTER NOT EXISTS pattern handles the "NULL means still valid" convention from LadybugDB. If expiredAt is absent (never set), the NOT EXISTS block succeeds — the edge is included. If expiredAt exists but is in the future relative to the query date, same result.
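The same convention, stated as a small Python predicate (an illustrative sketch, not part of the exporter):

```python
from datetime import datetime
from typing import Optional

def valid_at(learned_at: datetime, expired_at: Optional[datetime],
             when: datetime) -> bool:
    """The article's validity convention in one predicate: knowledge
    holds at `when` if it was learned on or before `when` and has not
    expired by then. Absent expired_at (None) means still valid."""
    if learned_at > when:
        return False
    return expired_at is None or expired_at > when

t = datetime(2024, 6, 1)
print(valid_at(datetime(2024, 1, 15), None, t))                  # → True
print(valid_at(datetime(2024, 1, 15), datetime(2024, 3, 1), t))  # → False
```

Both the Cypher WHERE clause and the SPARQL FILTER NOT EXISTS block compute exactly this predicate; only the treatment of "absent" differs (NULL versus a missing triple).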

Step 7: Write the Translator

Here’s a minimal Python script that reads a LadybugDB (Kùzu) database and emits Turtle. This is not production code — it’s a starting point you can adapt.

import kuzu
from datetime import datetime

def to_turtle(db_path: str, base_uri: str = "http://ladybugdb.org/data/") -> str:
    """Export a LadybugDB bipartite graph to Turtle format."""
    db = kuzu.Database(db_path)
    conn = kuzu.Connection(db)
    lines = []
    lines.append("@prefix ldb:  <http://ladybugdb.org/ontology#> .")
    lines.append("@prefix sst:  <http://ladybugdb.org/sst#> .")
    lines.append(f"@prefix data: <{base_uri}> .")
    lines.append("@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .")
    lines.append("@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .")
    lines.append("")

    # --- Map node table names to RDF classes ---
    table_class_map = {
        "EntityNode":      "ldb:EntityNode",
        "SimilarEdge":     "ldb:SimilarEdge",
        "ContainsEdge":    "ldb:ContainsEdge",
        "HasPropertyEdge": "ldb:HasPropertyEdge",
        "LeadsToEdge":     "ldb:LeadsToEdge",
    }

    # --- Property name mapping (Cypher column → Turtle predicate) ---
    prop_map = {
        "label":         "ldb:label",
        "kind":          "ldb:kind",
        "layer":         "ldb:layer",
        "learned_at":    "ldb:learnedAt",
        "expired_at":    "ldb:expiredAt",
        "context":       "sst:context",
        "similarity":    "sst:similarity",
        "probability":   "sst:probability",
        "property_name": "sst:propertyName",
    }

    # --- Export nodes ---
    for table_name, rdf_class in table_class_map.items():
        try:
            result = conn.execute(f"MATCH (n:{table_name}) RETURN n")
            while result.has_next():
                row = result.get_next()
                node = row[0]
                node_id = node["id"]
                uri = f"data:{node_id}"
                props = [f"    a  {rdf_class}"]
                for col_name, predicate in prop_map.items():
                    if col_name in node and node[col_name] is not None:
                        val = node[col_name]
                        if isinstance(val, datetime):
                            props.append(
                                f'    {predicate}  "{val.isoformat()}"^^xsd:dateTime'
                            )
                        elif isinstance(val, float):
                            props.append(
                                f'    {predicate}  "{val}"^^xsd:double'
                            )
                        else:
                            escaped = str(val).replace('"', '\\"')
                            props.append(f'    {predicate}  "{escaped}"')
                lines.append(uri)
                lines.append(" ;\n".join(props) + " .")
                lines.append("")
        except Exception:
            pass  # Table doesn't exist in this database

    # --- Export FROM_LINK relationships ---
    try:
        result = conn.execute(
            "MATCH (a)-[r:FROM_LINK]->(e) RETURN a.id, e.id"
        )
        while result.has_next():
            row = result.get_next()
            lines.append(f"data:{row[0]}  ldb:fromLink  data:{row[1]} .")
    except Exception:
        pass

    # --- Export TO_LINK relationships ---
    try:
        result = conn.execute(
            "MATCH (e)-[r:TO_LINK]->(b) RETURN e.id, b.id"
        )
        while result.has_next():
            row = result.get_next()
            lines.append(f"data:{row[0]}  ldb:toLink  data:{row[1]} .")
    except Exception:
        pass

    return "\n".join(lines)

if __name__ == "__main__":
    import sys
    db_path = sys.argv[1] if len(sys.argv) > 1 else "./ladybug_db"
    print(to_turtle(db_path))

Run it:

python ladybug_to_turtle.py ./my_memory_graph > export.ttl

Load the result into any triple store — Apache Jena, Oxigraph, Stardog, GraphDB — and query with SPARQL.

The Translation Cheat Sheet

Here’s the complete mapping for quick reference.

Schema-level translations:

CREATE NODE TABLE X (...) → ldb:X a owl:Class — each table becomes a class.

id STRING PRIMARY KEY → the URI suffix data:{id} — identity is the URI, no separate property needed.

label STRING → ldb:label "..." — string columns become datatype properties.

learned_at TIMESTAMP → ldb:learnedAt "..."^^xsd:dateTime — temporal provenance with XSD typing.

similarity DOUBLE → sst:similarity "0.78"^^xsd:double — numeric columns become typed literals.

Structural translations:

CREATE REL TABLE FROM_LINK (FROM A TO B|C|D) → ldb:fromLink a owl:ObjectProperty — polymorphism maps to union domain/range.

CREATE REL TABLE TO_LINK (FROM B|C|D TO A) → ldb:toLink a owl:ObjectProperty — same pattern, reversed direction.

(a)-[:FROM_LINK]->(e) → data:a ldb:fromLink data:e — a simple triple assertion.

(e)-[:TO_LINK]->(b) → data:e ldb:toLink data:b — same, other side of the bipartite hop.

Query-level translations:

WHERE e.expired_at IS NULL → FILTER NOT EXISTS { ?e ldb:expiredAt ?x } — open-world absence handling.

MATCH path = (a)(-[:FROM_LINK]->(e)-[:TO_LINK]->)+(b) → ?a (ldb:fromLink/ldb:toLink)+ ?b — property path composition, but note: this loses edge node data.

What You Gain, What You Lose

You gain interoperability with the entire Semantic Web stack — SPARQL federation, OWL reasoning, SHACL validation, linked data publication, and integration with knowledge graphs that speak RDF natively. If a collaborator uses Protégé, Wikidata, or any SPARQL endpoint, your LadybugDB memory graph becomes queryable without re-implementing the whole pipeline.

You gain a formal ontology. The OWL class hierarchy — SimilarEdge rdfs:subClassOf sst:EdgeNode — enables reasoning that LadybugDB's schema enforcement doesn't provide. An OWL reasoner can infer that any property of sst:EdgeNode applies to all four edge types without writing explicit queries.
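The inference itself is just transitive closure over rdfs:subClassOf. A toy Python sketch of what the reasoner does for you — illustrative only, real reasoners handle far more than subclass chains:

```python
def superclasses(cls, sub_class_of):
    """Transitive closure over rdfs:subClassOf — the inference an RDFS
    reasoner applies automatically to every typed instance."""
    out, stack = set(), [cls]
    while stack:
        c = stack.pop()
        for parent in sub_class_of.get(c, []):
            if parent not in out:
                out.add(parent)
                stack.append(parent)
    return out

# The hierarchy declared in Step 1
sub_class_of = {
    "ldb:SimilarEdge":     ["sst:EdgeNode"],
    "ldb:ContainsEdge":    ["sst:EdgeNode"],
    "ldb:HasPropertyEdge": ["sst:EdgeNode"],
    "ldb:LeadsToEdge":     ["sst:EdgeNode"],
}

# data:sim_tcp_http_001 a ldb:SimilarEdge  ⇒  it is also an sst:EdgeNode
print(superclasses("ldb:SimilarEdge", sub_class_of))  # → {'sst:EdgeNode'}
```

A query scoped to sst:EdgeNode therefore matches all four edge types without any UNION, which is the practical payoff of declaring the superclass.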

You lose write performance. LadybugDB writes edge-node triples in one CREATE statement backed by columnar storage. A triple store turns that into 8–12 individual triples (class assertion + properties + two link assertions) indexed across multiple B-trees.

You lose the bipartite guarantee at write time. In LadybugDB, Kùzu’s rel table constraints physically prevent EntityNode → EntityNode connections. In RDF, rdfs:domain and rdfs:range are descriptive — they inform reasoners but don't block invalid writes. You'd need SHACL shapes to enforce the bipartite constraint:

@prefix sh: <http://www.w3.org/ns/shacl#> .
ldb:BipartiteFromShape  a  sh:NodeShape ;
    sh:targetSubjectsOf  ldb:fromLink ;
    sh:class  ldb:EntityNode .

ldb:BipartiteToShape  a  sh:NodeShape ;
    sh:targetObjectsOf  ldb:toLink ;
    sh:class  ldb:EntityNode .

# ...and the edge-node side of the same rule:
ldb:BipartiteFromTargetShape  a  sh:NodeShape ;
    sh:targetObjectsOf  ldb:fromLink ;
    sh:class  sst:EdgeNode .

ldb:BipartiteToSourceShape  a  sh:NodeShape ;
    sh:targetSubjectsOf  ldb:toLink ;
    sh:class  sst:EdgeNode .

You lose nothing conceptually. The bipartite pattern, Semantic Spacetime’s four edge types, temporal validity, layered clustering — all of it roundtrips cleanly. The translation is lossless at the data level. The differences are in enforcement style and query ergonomics.

When to Export, When to Stay

Export to RDF when:

  • You need to publish your knowledge graph as linked data

  • Downstream tools expect SPARQL (most enterprise knowledge management platforms do)

  • You’re federating with external knowledge graphs (Wikidata, DBpedia, domain ontologies)

  • You want OWL reasoning or SHACL validation on your schema

Stay in LadybugDB when:

  • Your graph is an agent’s local memory (embedded, fast, single-writer)

  • You need sub-millisecond traversal for real-time decision-making

  • The bipartite constraint is safety-critical and must be enforced at write time

  • You’re running on an edge device where a triple store won’t fit

The two are not mutually exclusive. LadybugDB is the engine; RDF is the export format. Build your agent memory in the property graph. Publish it to the knowledge graph when the world needs to see it.

This article is a companion to

LadybugDB for Edge Agent AI memory
Seasoned Developer's Journey from COBOL to Web 3.0, SSI, Privacy First Edge AI, and Beyond
https://leanpub.com/ladybugdb

which covers the full LadybugDB schema design — bipartite graphs, Semantic Spacetime ontology, Promise Graphs, hypergraphs, metagraphs, and agentic memory architecture. The Leanpub edition gets updates.