Knowledge-Layer Scoring and Visibility Plan
Status: Proposed
Date: April 15, 2026
Scope: Replace hardcoded Ebbinghaus memory-tier decay behavior with a generic, profile-and-policy-driven decay and scoring system that can support existing, proposed, or future decay models. Promotive declarative tiers are expressed through separate promotion profile and policy subsystems. The system supports MVCC-aware score-from selection for both nodes and edges, implements efficient deindexing for visibility-suppressed nodes and edges, and persists ON ACCESS mutation state in a separate accessMeta index so that nodes and edges remain read-only during policy evaluation. Scoring is evaluated before query visibility, so invisible entities are suppressed from queries unless accessed through reveal(), and promotion policies are resolved before decay profiles.
1. Objective
Implement a flexible decay and scoring architecture in NornicDB where retention behavior is resolved from policies rather than hardcoded cognitive tiers.
The system must support:
- no-decay entities and properties
- configurable decay half-lives and thresholds
- node-, edge-, and property-level decay behavior
- named policy presets for operator convenience
- separate promotion policies that declaratively model tier-like score boosts by referencing promotion profiles, without changing the existing Cypher scoring API
- declarative MVCC-aware score-from selection through decay profile options
- future decay models without requiring new engine enums or switch statements
- efficient visibility suppression of whole nodes and whole edges
- asynchronous removal of suppressed nodes and suppressed edges from indexing
- property-level decay effects that can exclude properties from vectorization or retrieval surfaces without suppressing, moving, or deleting those properties from storage
Nodes and edges must be treated as first-class decay targets. A node or edge must be able to decay, be scored, be suppressed from retrieval, be removed from indexing, and be promoted using the same policy-driven machinery.
Properties are not suppression targets. Properties may receive decay scores and vectorization-exclusion behavior, but they remain stored in place and remain directly queryable through Cypher.
This plan is intentionally model-agnostic. It is not tied to any one research paper or taxonomy, although it was inspired by a research paper that called out NornicDB specifically: https://arxiv.org/pdf/2604.11364
2. Problem Statement
NornicDB currently has memory-decay behavior that depends on fixed tier names and fixed decay assumptions. That makes the system harder to evolve because retention logic is embedded in runtime code rather than expressed declaratively.
That creates six engineering problems:
- Adding new retention behavior requires code changes instead of policy changes.
- The engine assumes a closed set of decay categories.
- Decay is primarily entity-wide instead of being expressible at node, edge, and property scope.
- Operators cannot declare retention semantics through the same schema-oriented mechanisms they already use elsewhere.
- Under MVCC, decay scoring has no well-defined start-time anchor unless the policy states whether score age begins at entity creation time or at the latest visible version time.
- Suppressed nodes and edges must be removed from indexing efficiently, without expensive full-index scans, while property-level decay behavior must not be confused with whole-entity suppression.
The system should instead treat decay behavior as configurable retention profiles, promotion behavior as separate configurable scoring profiles and policies, score start time as an explicit profile decision, and deindex cleanup as a dedicated deindex workflow for nodes and edges only.
3. Design Principles
- Retention behavior must be data-driven, not hardcoded into a fixed enum.
- Decay and scoring must be resolvable at node, edge, and property scope.
- NO DECAY must be directly expressible in policy definitions.
- Decay half-life, decay function, visibility threshold, and score floor must be configurable independently.
- Promotion tiers must be expressible declaratively through separate promotion profile and promotion policy subsystems rather than through hardcoded runtime categories.
- Score start time must be declaratively expressible through decay profile options using
CREATED, VERSION, or CUSTOM.
- Nodes and edges must be handled symmetrically by the policy system. Edge decay must not be a second-class or special-case feature.
- Suppression behavior applies only to whole nodes and whole edges, never to individual properties.
- Property-level decay may influence vectorization, ranking, filtering, reranking, and summarization, but it must not move, suppress, or delete stored property values.
- Properties that participate in structural indexes (lookup indexes, range indexes, and composite indexes) are immune to decay scoring, decay hiding, and property-level exclusion. Fulltext indexes and vector indexes are retrieval-surface indexes and do not confer property immunity. Indexed properties must remain stable and always visible because they are relied upon for aggregation, joining, and lookup.
- Suppressed nodes and edges must be removed from indexing using exact-key deindexing rather than discovery by scanning secondary indexes.
- Runtime paths must not silently fall back to legacy tier assumptions.
- Named presets may exist for convenience, but the engine must operate on resolved profiles and policies.
- The architecture must be flexible enough to support any current or future decay model.
4. Target Architecture
4.1 Decay Profile Layer
Decay profiles are the mechanism that decides whether decay applies, at what rate, at what scope, and from which score start time decay age is measured. Decay profiles are the only decay authoring surface — there is no separate decay policy concept.
Required behavior:
- resolve effective decay profile from configuration and profile definitions
- support node-, edge-, and property-level targeting
- allow
NO DECAY and rate-based decay without relying on fixed tier names
- permit named presets but not require them
- support multiple decay functions over time
- support score start-time selection through profile options
- resolve suppression eligibility for whole nodes and whole edges
- resolve property-level vectorization-exclusion behavior without treating properties as suppression targets
- reject or ignore property-level decay rules that target properties participating in structural indexes (lookup, range, and composite indexes); indexed properties are immune to decay scoring and hiding; fulltext and vector indexes do not confer immunity
- enforce at most one decay profile per unique target as a hard constraint
Suggested fit in NornicDB:
- shared profile resolver used by recall, recalc, suppression pass, and ranking paths
- config-defined presets for operator convenience
- schema-backed decay profiles as the main control surface
- diagnostics that explain why a given node or edge resolved to a given decay profile and score start time
4.2 Promotion Layer
Promotion behavior is split into two object types: profiles and policies.
Promotion profiles are named parameter bundles (multiplier, score floor, score cap, scope). They contain no logic and cannot be targeted to entities directly. They are referenced by name inside promotion policy APPLY blocks.
Promotion policies contain logic — FOR targets, APPLY blocks, WHEN predicates, and optional ON ACCESS mutation blocks. Policies bind profiles to specific node labels, edge types, and property paths. Promotion policies are resolved first, before decay profile resolution. WHEN predicates are evaluated before ON ACCESS mutations — if the entity is visibility-suppressed (below the visibility threshold), ON ACCESS mutations do not execute. This prevents suppressed entities from accumulating access state they should not have. The promotion adjustments are applied to the base decay score to produce the final score without changing the existing Cypher scoring API.
Required behavior:
- resolve applicable promotion profiles through promotion policy evaluation
- support node-, edge-, and property-level targeting
- allow promotion profiles to declare score multipliers, caps, and floors
- when multiple WHEN predicates match within a policy, the profile with the highest effective multiplier wins deterministically
- keep promotion profiles separately authored, shown, and retrieved from promotion policies
- support optional
ON ACCESS mutation blocks that execute when the target is accessed during scoring resolution, but only after WHEN predicates have been evaluated and only if the entity passes the suppression gate (is not visibility-suppressed); ON ACCESS mutations write exclusively to a separate accessMeta index keyed to the target node or edge, never to the node or edge itself
- enforce at most one promotion policy per unique target as a hard constraint
Suggested fit in NornicDB:
- a dedicated promotion subsystem with its own catalog and DDL for both profiles and policies
- a separate accessMeta index that stores
ON ACCESS mutation state per target node or edge as map[string]interface{}, serialized in msgpack alongside other data files for performance
- ON ACCESS mutations are accumulated in-process and flushed asynchronously; ON ACCESS blocks are syntactic sugar for declaring which counters and timestamps the accumulator tracks — they are not executed as literal Cypher statements on every read
- the hot path (read time) buffers ON ACCESS increments in a per-entity sharded counter ring (
[N]atomic.Int64, N = number of shards, e.g. 64), keyed by hash(entityID) % N; each shard holds a delta, not an absolute value; no msgpack, no Badger write, no allocation; the read path sees storedValue + pendingDelta via a single atomic load
- the cold path (flush) is a background goroutine that drains the counter ring on a configurable interval (default: 1s–5s); it reads each non-zero shard, atomically swaps it to zero, and applies a single batched Badger write that merges the deltas into the persisted accessMeta entries; this is the only path that does msgpack round-trips
- atomic increments on the shard eliminate lost updates; the flush goroutine is the sole writer to Badger accessMeta keys, eliminating write contention
- timestamp fields (
lastAccessedAt, lastTraversedAt) are stored as atomic.Int64 (UnixNano) in the same shard struct; the flush writes the latest value, not an accumulation
- access counts are eventually consistent with a bounded lag of one flush interval; WHEN predicates that read
n.accessCount see persisted + buffered delta by reading through the accumulator, not Badger
- shared runtime scoring that first resolves the promotion policy, then resolves the decay profile, then applies promotion adjustments to the base decay score
- property reads within
ON ACCESS blocks and WHEN predicates resolve from accessMeta first, falling back to the node or edge's stored properties
- diagnostics that explain which promotion policy matched, which profile was selected, and how it affected the final score
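The hot-path/cold-path split above can be sketched as follows. This is a simplified illustration, not the implementation: it keeps a single counter per shard, whereas the real accumulator would track per-entity deltas within each shard so the flush can attribute them to individual accessMeta entries; the Badger merge itself is elided.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sync/atomic"
)

const numShards = 64 // assumed default; the plan leaves N configurable

// shard buffers pending ON ACCESS deltas. Real shards would hold
// per-entity deltas; this sketch keeps one counter per shard.
type shard struct {
	accessDelta    atomic.Int64
	lastAccessedAt atomic.Int64 // UnixNano; flush takes the latest value
}

type counterRing struct {
	shards [numShards]shard
}

func (r *counterRing) shardFor(entityID string) *shard {
	h := fnv.New64a()
	h.Write([]byte(entityID))
	return &r.shards[h.Sum64()%numShards]
}

// recordAccess is the hot path: one atomic add, no allocation, no I/O.
func (r *counterRing) recordAccess(entityID string, nowNano int64) {
	s := r.shardFor(entityID)
	s.accessDelta.Add(1)
	s.lastAccessedAt.Store(nowNano)
}

// drain is the cold path: atomically swap each shard's delta to zero
// and hand the total to one batched write (the Badger merge is elided).
func (r *counterRing) drain() (total int64) {
	for i := range r.shards {
		total += r.shards[i].accessDelta.Swap(0)
	}
	return total
}

func main() {
	var ring counterRing
	for i := 0; i < 1000; i++ {
		ring.recordAccess("node-42", int64(i))
	}
	fmt.Println(ring.drain()) // 1000 buffered increments, zero Badger writes
}
```

The atomic Swap is what makes the flush goroutine lossless: an increment that races with a flush lands either in the drained value or in the next interval, never nowhere.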
4.3 Authoring Subsystem Layer
The authoring subsystem is the surface for declaring decay profiles and promotion profiles and policies.
Required behavior:
- allow operators to declare decay profiles in Cypher
- allow operators to declare promotion profiles and promotion policies in Cypher
- validate definitions at creation time where applicable
- expose profiles and policies through introspection and admin APIs
- enforce one decay profile and one promotion policy per unique target
- support property-targeted rules in addition to node and edge targets
Suggested fit in NornicDB:
- introduce a dedicated decay profile subsystem with its own catalog and DDL
- introduce a dedicated promotion subsystem with its own catalog and DDL for both profiles and policies
- borrow authoring, validation, and introspection patterns from the constraint subsystem without making decay or promotion rules first-class constraints
- express property-level retention and promotion as inline entries inside profile or policy bodies
- add retention-specific and promotion-specific resolution rules alongside existing schema rules
4.4 Runtime Resolution Layer
The runtime resolution layer converts configuration and profiles into effective decay behavior and final score for a node, edge, or property. Scoring happens before query visibility — a node or edge must be scored before it becomes visible to the query.
Required behavior:
- evaluate the matching promotion policy first during recall, reinforcement, recalc, suppression pass, and ranking; evaluate
WHEN predicates to determine the entity's promotion tier and whether it is suppressed; execute ON ACCESS mutations only if the entity passes the suppression gate (is not visibility-suppressed after promotion and decay resolution)
- resolve decay profile second during recall, reinforcement, recalc, suppression pass, and ranking
- resolve score start time from decay profile during score evaluation
- compute the final score from promotion and decay resolution before determining query visibility
- suppress nodes, edges, and properties from query results when their final score renders them invisible, unless the caller uses
reveal() to bypass scoring-driven visibility
- support explicit overrides and inheritance
- allow property-level state without forcing entity-wide decay
- resolve inline property entries from the active decay profile before falling back to entity defaults
- expose final decay score through native Cypher functions without changing Neo4j-compatible node or relationship result shapes
- expose raw stored entities through
reveal() without decay-driven visibility filtering or property hiding
- avoid duplicated logic across CLI, DB, and API code paths
Suggested fit in NornicDB:
- one shared resolver used by DB runtime, CLI decay tools, Cypher procedures, and background maintenance
- one explanation format returned by diagnostics and admin endpoints
- one shared scorer that evaluates promotion first, then computes base score from decay profile, then applies promotion adjustments to produce the final score
- one shared MVCC-aware score-start resolver that interprets
CREATED, VERSION, and CUSTOM
- one
reveal() bypass path that returns the raw stored entity, skipping scoring-driven visibility and property hiding
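The shared scorer's ordering can be illustrated with a minimal sketch. The `promotion` bundle and `finalScore` names are hypothetical stand-ins for the catalog objects; the point is only the order of operations: base decay score first, then the resolved promotion profile's multiplier, floor, and cap.

```go
package main

import "fmt"

// promotion is an illustrative parameter bundle; the real
// PromotionProfile catalog object carries more fields.
type promotion struct{ multiplier, floor, cap float64 }

// finalScore applies promotion adjustments to the base decay score,
// mirroring the shared-scorer order described above.
func finalScore(base float64, p promotion) float64 {
	s := base * p.multiplier
	if s < p.floor {
		s = p.floor
	}
	if p.cap > 0 && s > p.cap {
		s = p.cap
	}
	return s
}

func main() {
	boost := promotion{multiplier: 2.0, floor: 0.1, cap: 1.0}
	fmt.Println(finalScore(0.3, boost)) // boosted to 0.6
	fmt.Println(finalScore(0.9, boost)) // capped at 1.0
}
```

Because the adjustment happens inside the scorer, callers of the existing Cypher scoring API see only the final value.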
4.5 MVCC Interaction Layer
MVCC version resolution and decay scoring are separate concerns, but scoring gates query visibility. MVCC determines which version of an entity exists at the transaction snapshot. Scoring then determines whether that version is visible to the query.
Required behavior:
- resolve the visible node, edge, or property version using the transaction snapshot
- evaluate promotion policy and decay profile on the resolved version before exposing the entity to the query
- evaluate the base decay score using the score start time resolved from decay profile
- suppress entities whose final score falls below the visibility threshold from query results, search hits, and traversal paths
- support
CREATED, where decay age begins at the entity's original creation timestamp
- support
VERSION, where decay age begins at the latest visible version timestamp under MVCC
- allow
reveal() to bypass scoring-driven visibility and return the MVCC-resolved version without suppression
- never require new stored versions solely because a derived score changed over time
Suggested fit in NornicDB:
- version resolution remains owned by MVCC
- score start-time choice remains owned by decay profile
- the shared scorer consumes both the visible node or edge version and the profile-resolved score start time
- query visibility is determined after scoring: MVCC resolves the version, scoring determines whether it appears
4.6 Visibility Suppression and Deindex Layer
The visibility suppression and deindex layer removes suppressed whole nodes and whole edges from indexing efficiently, using exact-key deletes rather than index scans.
Required behavior:
- suppress only whole nodes and whole edges
- never suppress, move, or delete individual properties because of decay profile
- mark suppressed nodes and edges in primary storage immediately
- remove suppressed nodes and edges from indexing asynchronously
- avoid discovering stale index entries by scanning entire secondary indexes
- support a configurable background cleanup cadence, defaulting to nightly but configurable in seconds
- ensure suppressed nodes and edges are skipped efficiently during retrieval
- allow property-level vectorization exclusion without storage relocation or Cypher inaccessibility
Suggested fit in NornicDB:
- maintain a per-node and per-edge index-entry catalog that stores the exact secondary-index keys written for that entity
- when a node or edge becomes suppressed, enqueue a deindex work item referencing that entity and its index-entry catalog
- have the background deindex job drain deindex work items and perform blind batched deletes against index keys
- keep read-time suppressed checks cheap so suppressed entities are skipped even before asynchronous deindex completes
- treat physical space reclamation as separate storage maintenance rather than part of logical suppression/deindex semantics
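The blind batched delete described above can be sketched as follows. The type and function names are illustrative, and the actual index-delete call is abstracted behind a callback; the point is that the catalog supplies exact keys, so no secondary index is ever scanned.

```go
package main

import "fmt"

// DeindexWorkItem references a suppressed entity and the exact
// secondary-index keys recorded in its index-entry catalog.
// Names are illustrative, not NornicDB's actual types.
type DeindexWorkItem struct {
	TargetID  string
	IndexKeys [][]byte // exact keys from the index-entry catalog
}

// drainDeindexQueue performs blind batched deletes: it hands the
// cataloged keys straight to the delete batch, with no index scan.
func drainDeindexQueue(queue []DeindexWorkItem, deleteBatch func([][]byte) error) error {
	for _, item := range queue {
		if err := deleteBatch(item.IndexKeys); err != nil {
			return fmt.Errorf("deindex %s: %w", item.TargetID, err)
		}
	}
	return nil
}

func main() {
	deleted := 0
	queue := []DeindexWorkItem{
		{TargetID: "node-1", IndexKeys: [][]byte{
			[]byte("idx/name/alice"), []byte("idx/age/30")}},
	}
	_ = drainDeindexQueue(queue, func(keys [][]byte) error {
		deleted += len(keys)
		return nil
	})
	fmt.Println(deleted) // 2 exact-key deletes, zero index scans
}
```

Retry bookkeeping (next attempt, retry count) from the DeindexWorkItem schema is omitted for brevity.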
5. Logical Resolution Model
Because decay scores are derived rather than stored on fields, this section describes runtime resolution artifacts and schema objects, not a stored score data model.
5.1 Schema Objects
DecayProfile
Database object used to define reusable decay parameter bundles. Profiles contain no logic — they declare configuration values only.
Minimum fields:
- profile id
- profile name
- half-life definition in seconds
- scoring function or strategy id
- score start time:
CREATED, VERSION, or CUSTOM
- custom score-from property path, if
CUSTOM
- visibility threshold override for node or edge suppression eligibility
- minimum score floor
- scope type: node, edge, property
- enabled or disabled
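The minimum fields above might map onto a catalog struct like the following. Field and constant names are illustrative; the stored catalog format may differ.

```go
package main

import "fmt"

// ScoreStart selects where decay age is measured from.
type ScoreStart int

const (
	ScoreFromCreated ScoreStart = iota
	ScoreFromVersion
	ScoreFromCustom
)

// DecayProfile mirrors the minimum fields listed above.
// Profiles carry configuration only, never logic.
type DecayProfile struct {
	ID                  string
	Name                string
	HalfLifeSeconds     int64  // 0 could model NO DECAY (assumption)
	FunctionID          string // scoring function or strategy id
	ScoreStart          ScoreStart
	ScoreFromProperty   string  // consulted only when ScoreStart == ScoreFromCustom
	VisibilityThreshold float64 // suppression eligibility for nodes/edges
	ScoreFloor          float64
	Scope               string // "node", "edge", or "property"
	Enabled             bool
}

func main() {
	p := DecayProfile{Name: "episodic-7d", HalfLifeSeconds: 7 * 24 * 3600,
		FunctionID: "exponential", ScoreStart: ScoreFromVersion,
		VisibilityThreshold: 0.05, Scope: "node", Enabled: true}
	fmt.Println(p.Name, p.HalfLifeSeconds)
}
```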
PromotionProfile
Database object used to define reusable promotive scoring parameter bundles. Profiles contain no logic — they declare configuration values only.
Minimum fields:
- profile id
- profile name
- score multiplier
- optional score floor override
- optional score cap override
- scope type: node, edge, property
- enabled or disabled
PolicyBackedDecayRule
Logical rule compiled from decay profile definitions and used by the resolver.
Minimum fields:
- contract name
- policy name
- entity target: label or edge type
- property path, if any
- rule kind: no-decay, policy, rate, threshold, floor, function
- referenced policy name, if any
- inline rule order for deterministic precedence
- original expression text for diagnostics
PolicyBackedPromotionRule
Logical rule compiled from promotion policy definitions and used by the resolver.
Minimum fields:
- contract name
- policy name
- entity target: label or edge type
- property path, if any
- rule kind: promotion-profile, multiplier, floor, cap
- referenced policy name, if any
- predicate expression
- inline rule order for deterministic precedence
- original expression text for diagnostics
AccessMeta
Persistent metadata index that stores ON ACCESS mutation state separately from the node or edge it describes. Each entry is a map[string]interface{} keyed to a target node or edge identifier. AccessMeta entries are serialized in msgpack alongside other data files for performance.
Nodes and edges are read-only during ON ACCESS evaluation. All writes within an ON ACCESS block mutate the target's accessMeta entry, never the target's stored properties. All reads within ON ACCESS blocks and WHEN predicates resolve from the target's accessMeta entry first, falling back to the target's stored properties when the key is not present in accessMeta. The stored(entity) qualifier may be used inside WHEN predicates and ON ACCESS blocks to force a read from stored node or edge properties, bypassing accessMeta-first resolution. stored() is the escape hatch for properties managed by external processes and is not a general Cypher function.
AccessMeta has a fast-path fixed-layout struct for the most common fields (accessCount int64, lastAccessedAt int64, traversalCount int64, lastTraversedAt int64). Only custom keys fall back to the map[string]interface{} overflow map. The fixed-layout struct serializes to a known-size byte slice with no reflection. msgpack is used only for the overflow map of custom keys. Pre-allocated per-entity byte buffers in the flush goroutine are reused across iterations (sync.Pool or ring buffer).
All integer values in accessMeta are normalized to int64 and all floating-point values to float64 at deserialization time. This normalization ensures that Cypher arithmetic in ON ACCESS and WHEN blocks operates on consistent types — coalesce(n.accessCount, 0) + 1 always works because both operands are int64. Boolean values remain bool. String values remain string. time.Time is stored as int64 (UnixNano) and converted on read.
ON ACCESS mutations are not executed as literal Cypher writes on every read. They are accumulated in-process via a sharded counter ring and flushed asynchronously to Badger in batches. See section 4.2 for the accumulator design.
Minimum fields:
- target id
- target scope: node or edge
- metadata map:
map[string]interface{}
- last accessed at
- last mutated at
- mutation count
AccessMeta Lifecycle
When a node or edge is deleted (tombstoned in MVCC), its accessMeta entry is enqueued for deletion in the same transaction. The accessMeta key is deleted immediately from the in-process accumulator and enqueued as a deindex work item alongside any index-entry catalog cleanup.
When a node or edge is suppressed, its accessMeta entry is retained — suppressed entities are still accessible via reveal(), and policy() on a revealed entity should still return its access history. The accessMeta entry is only deleted when the entity is physically reclaimed by the compliance retention lifecycle.
When MVCC version pruning removes all versions of an entity, its accessMeta entry is eligible for deletion. The PruneMVCCVersions function should check for orphaned accessMeta entries and delete them.
AccessMeta keys use a dedicated prefix (prefixAccessMeta) so that orphan detection is a prefix scan bounded to accessMeta, not a full database scan.
AccessMeta is included in MVCC snapshot isolation. Updates are atomic but the snapshot is always as of the transaction time.
IndexEntryCatalog
Persistent catalog of exact index entries created for a node or edge.
Minimum fields:
- target id
- target scope: node or edge
- index entry key list or catalog reference
- index family identifiers, if partitioned
- last indexed version, if tracked
- suppressed boolean or state marker, if duplicated for cleanup convenience
DeindexWorkItem
Persistent background work item used to deindex a visibility-suppressed node or edge.
Minimum fields:
- work item id
- target id
- target scope: node or edge
- suppression state
- enqueued at
- next attempt at
- retry count
- cleanup status
- index catalog reference or direct key reference
5.2 Derived Runtime Artifacts
ScoringResolution
Derived resolution result produced by the shared resolver for a requested node, edge, or property.
Minimum fields:
- target id
- target scope
- resolved decay profile id
- resolved score start time
- resolution source chain
- applied decay profile names
- applied decay profile entries
- applied promotion policy name
- applied promotion profile name selected by the policy
- effective rate
- effective threshold
- effective multiplier
- base score
- final score
- no-decay boolean
- suppression-eligible boolean for node or edge targets only
DecayResolutionMeta
Derived metadata emitted at read time for Cypher and unified search surfaces.
Minimum fields:
- entity id
- entity scope: node or edge
- entity decay score, if applicable
- score start time
- per-property resolved score map
- optional per-property explanation payload
5.3 Design Rule
- derived scores are not persisted into node, edge, or property payloads
- the shared resolver is the source of truth for node-, edge-, and property-level scoring
- Cypher functions and unified search metadata project derived scores outward without mutating stored graph data
- the existing Cypher scoring API remains unchanged; resolved promotion policies affect the returned score through the shared scorer rather than through new function signatures
- the score start time is resolved from decay profile and used by the shared scorer without changing the existing Cypher scoring API
- whole-node and whole-edge suppression state is persisted
- property suppression state is not persisted because properties are not suppression targets
- property-level decay may exclude properties from vectorization or retrieval surfaces but must not move or delete stored property values
- ON ACCESS mutation state is persisted in a separate accessMeta index keyed per target node or edge, not on the node or edge itself
- accessMeta entries are
map[string]interface{} serialized in msgpack alongside other data files for performance
- nodes and edges are read-only during
ON ACCESS evaluation; all writes target the accessMeta index
- property reads within
ON ACCESS blocks and WHEN predicates resolve from accessMeta first, falling back to stored node or edge properties
- the
policy() Cypher function projects accessMeta outward without implying that access-tracking metadata is stored on the node or edge
6. Query and Resolution Semantics
6.1 Resolution Rules
Scoring happens before query visibility. When a query touches a node or edge, the engine must resolve and apply promotion and decay scoring before deciding whether the entity is visible to the query. An entity whose final score falls below the visibility threshold or whose decay profile renders it invisible must not appear in MATCH results, WHERE evaluation, or search hits unless the caller explicitly uses reveal(entity) to bypass scoring-driven visibility.
The resolution order is: promotion first, then decay, then score-start resolution, then visibility determination.
Every scoring-aware read or maintenance operation should resolve the promotion policy first, in this order:
- property-level promotion policy entries that match the target
- entity-level promotion policy entries that match the target
- edge-type or label-targeted promotion policy
- wildcard-targeted promotion policy (
FOR (n:*) or FOR ()-[r:*]-())
- configured default promotion behavior, if any
Then every scoring-aware operation should resolve the decay profile in this order:
- explicit no-decay rule
- property-level inline rule inside the applicable decay profile
- entity-level rule inside the applicable decay profile
- edge-type or label-targeted decay profile
- wildcard-targeted decay profile (
FOR (n:*) or FOR ()-[r:*]-())
- configured default decay profile
Then every score-aware read should resolve the score start time from the resolved decay profile:
- CREATED, if the resolved decay profile declares CREATED
- VERSION, if the resolved decay profile declares VERSION
- CUSTOM, if the resolved decay profile declares CUSTOM with a scoreFromProperty path; the property is resolved from accessMeta first, falling back to stored node or edge properties; if the resolved value is null or unparsable, log a warning and fall back to entity creation time
- configured default score start time, if no explicit profile value applies
Then the engine computes the final score and determines visibility:
- compute the base decay score from the resolved decay profile and score start time
- apply the resolved promotion policy adjustments to produce the final score
- if the final score falls below the visibility threshold, the entity is invisible to the query unless accessed through
reveal(); ON ACCESS mutations do not execute for suppressed entities
- if property-level decay excludes a property from retrieval surfaces, that property is hidden from the query result unless accessed through
reveal()
- properties that participate in structural indexes (lookup, range, and composite indexes) are never subject to the two hiding rules above — they are immune to decay scoring, decay hiding, and property-level exclusion regardless of any matching decay profile or promotion policy; fulltext and vector indexes do not confer this immunity
If no promotion policy matches, the target should resolve with a neutral promotion effect.
If no decay profile matches, the engine should either treat the target as non-decaying or use an explicit configured default decay profile, but it must not silently assume any legacy tier.
If no score start time matches, the engine should use an explicit configured default. The recommended default is VERSION.
Compiled Binding Tables and Lazy Scoring
The resolution cascade above is the logical model. The implementation pre-flattens it at DDL time using a three-tier optimization strategy.
Tier 1 — Compile-time profile binding table. When a decay profile or promotion policy is created, altered, or dropped, the schema manager builds a direct lookup table: map[string]*compiledBinding keyed by label or edge type. Each compiledBinding holds the resolved decay profile pointer, the resolved promotion policy pointer, the visibility threshold, the score-start mode, and the decay function pointer. Wildcard entries are expanded into per-label/per-type entries at compile time. Resolution at query time is a single map lookup — no cascade. The table is rebuilt on any DDL change, which is rare. For multi-label nodes, the table keys on sorted label sets, not individual labels.
Tier 2 — Suppressed-bit fast path. Suppressed entities already have a persisted marker in primary storage. The read path checks the suppressed bit before any profile resolution. If suppressed and the query does not use reveal(), skip immediately. Cost: one byte check. This eliminates full resolution for the entire suppressed population.
Tier 3 — Amortized score computation. For non-suppressed entities with exponential decay, the score is a pure function of (now - scoreFrom, halfLife). Pre-compute a score threshold timestamp: thresholdAge = -halfLife * ln(visibilityThreshold) / ln(2). At read time, compare now - scoreFrom > thresholdAge using integer subtraction on UnixNano values — no math.Exp() needed for the visibility check. Only compute the precise float64 score when the entity survives visibility and is projected into results (lazy scoring). This reduces the hot path to one integer comparison per entity. The thresholdAge is computed once at compile time per decay profile and stored as int64 nanoseconds in the compiled binding.
For ORDER BY decayScore(n), the scorer can use a monotonic proxy: scoreFromTime.UnixNano() itself is monotonically related to the decay score (newer = higher score) for a fixed half-life and function. Sorting by scoreFromTime DESC is therefore equivalent to sorting by decayScore DESC without computing any exponentials. The precise score is only needed if the caller mixes decayScore() with other expressions in ORDER BY.
Multi-Label Node Resolution
If a node has multiple labels (e.g., :SessionRecord:MemoryEpisode) and separate decay profiles exist for both labels, the following rules apply:
- When a
CREATE DECAY PROFILE ... FOR (n:LabelA) is issued, the schema manager checks whether any existing node in the database has both :LabelA and another label that already has a targeted binding. If so, the CREATE fails with: "Conflict: nodes with labels [:LabelA, :LabelB] would match two decay profiles. Create a dedicated profile for the multi-label combination or drop one of the conflicting profiles."
- If the operator explicitly wants multi-label handling, they create a profile targeting the multi-label combination:
FOR (n:SessionRecord:MemoryEpisode). A multi-label target takes precedence over any single-label target.
- At query time, if a multi-label node somehow matches multiple bindings (e.g., a label was added after profile creation), resolution picks the binding with the most specific (most labels) target. If two bindings have equal specificity, the resolver logs a diagnostic warning and treats the node as non-decaying until the conflict is resolved.
- The compiled binding table handles this by keying on sorted label sets, not individual labels.
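The rules above can be sketched as a small resolver. The types here are illustrative stand-ins, not NornicDB's actual binding table; only the sorted-label-set keying and most-specific-wins behavior follow the plan.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

type binding struct {
	profile string
	labels  []string
}

// key canonicalizes a label set for the binding table: sorted, then joined.
func key(labels []string) string {
	s := append([]string(nil), labels...)
	sort.Strings(s)
	return strings.Join(s, ":")
}

// resolve picks the matching binding with the most labels (most specific).
// No match, or a tie at equal specificity, is reported as an error and the
// caller treats the node as non-decaying.
func resolve(nodeLabels []string, table map[string]binding) (binding, error) {
	have := map[string]bool{}
	for _, l := range nodeLabels {
		have[l] = true
	}
	var best []binding
	bestN := 0
	for _, b := range table {
		matched := true
		for _, l := range b.labels {
			if !have[l] {
				matched = false
				break
			}
		}
		if !matched {
			continue
		}
		if len(b.labels) > bestN {
			best, bestN = []binding{b}, len(b.labels)
		} else if len(b.labels) == bestN {
			best = append(best, b)
		}
	}
	if len(best) == 1 {
		return best[0], nil
	}
	return binding{}, fmt.Errorf("ambiguous or missing binding for %v", nodeLabels)
}

func main() {
	table := map[string]binding{}
	for _, b := range []binding{
		{"session", []string{"SessionRecord"}},
		{"combined", []string{"MemoryEpisode", "SessionRecord"}},
	} {
		table[key(b.labels)] = b
	}
	b, err := resolve([]string{"SessionRecord", "MemoryEpisode"}, table)
	fmt.Println(b.profile, err) // combined <nil>
}
```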
6.2 MVCC Score Start-Time Semantics
The engine should support three profile-declared score start times: CREATED, VERSION, and CUSTOM. These semantics must apply equally to nodes and edges.
CREATED
CREATED means the decay age is measured from the entity's original creation timestamp.
Semantics:
- MVCC determines which node or edge version is visible at the transaction snapshot
- the scorer uses the original creation timestamp as the start of decay age
- later updates do not reset decay age
CREATED is the durable, age-from-origin option
VERSION
VERSION means the decay age is measured from the latest visible version timestamp under MVCC.
Semantics:
- MVCC still determines which node or edge version is visible at the transaction snapshot
- the scorer uses the latest visible version timestamp as the start of decay age
- updates reset decay age for the visible target
VERSION is the freshness-from-last-change option
CUSTOM
CUSTOM means the decay age is measured from a user-specified property value on the entity.
Semantics:
- MVCC still determines which node or edge version is visible at the transaction snapshot
- the scorer reads the property path declared in the decay profile's scoreFromProperty option using accessMeta-first resolution: the property is resolved from the target's accessMeta entry first, falling back to the target's stored node or edge properties only when the key is not present in accessMeta
- the property value must be a timestamp; if the resolved value is missing, null, or not parsable as a timestamp, the scorer should log a warning and fall back to the entity's original creation time
CUSTOM is the operator-defined, domain-specific option
Rule
Visibility is always snapshot-based. Only the decay-age start time changes.
The scoring timestamp ("now") is the transaction's MVCC snapshot timestamp for query paths, or the maintenance cycle start time for background paths. The scorer does not call time.Now(). The scorer receives the snapshot timestamp from the transaction context. This ensures deterministic, repeatable scoring within a transaction: the same entity queried twice in the same transaction returns the same score. The snapshot timestamp is already available in the MVCC read path (MVCCVersion.CommitTimestamp). It is passed through to the scorer as scoringTime. For background maintenance (recalc, suppression pass), scoringTime is time.Now() at the start of the maintenance cycle, frozen for the duration of the batch.
The system must not create new stored versions solely because a derived score changed.
6.3 Property-Level and Edge-Level Semantics
Property-level decay is required for mixed-longevity entities.
Examples:
- a Profile node may keep name and tenantId permanently while decaying lastConversationSummary
- a Task edge may keep identity and timestamps permanently while decaying a transient confidence field
- a Document node may keep canonical content permanently while decaying ranking hints or ephemeral summaries
- a CO_ACCESSED edge may decay as a whole, even if neither endpoint node decays at the same rate
Edge-level decay should support at least these outcomes:
- lowering ranking weight for an edge during retrieval or traversal
- suppression or hiding of an edge while preserving endpoint nodes
- edge-specific decay independent of the decay profile of connected nodes
Property decay should support at least these outcomes:
- lower ranking weight for the property during retrieval
- exclusion of the property from vectorization or vector-backed retrieval if policy says so; when the AccessFlusher detects that a property's score has crossed below its visibility threshold during a flush cycle, it writes an explicit nil for that property key in the entity's AccessMetaEntry.Overflow map and re-queues the node for embedding via InvalidateManagedEmbeddings + AddToPendingEmbeddings; the embed worker applies accessMeta-first projection over node properties before building embed text — any property that resolves to explicit nil after projection is excluded from the embed text; when a property's score rises above the threshold (e.g., via promotion), the flusher removes the nil key from the overflow map and re-queues the node, and the next embed cycle includes the property again; no separate background scorer or persisted suppression list is needed — the overflow map nil convention is the suppression signal
- explicit supersession or replacement behavior in retrieval logic, if configured
Properties that participate in structural indexes (lookup indexes, range indexes, and composite indexes) are immune to property-level decay scoring, decay hiding, and vectorization exclusion. These properties must remain stable and always visible to queries because index-backed operations depend on their values being present and consistent. Fulltext indexes and vector indexes are retrieval-surface indexes and do not confer property immunity — property-level decay may exclude a property from a vector index or fulltext search without breaking aggregation or joins. If a decay profile or promotion policy contains a property-level rule that targets a property participating in a structural index, the engine should reject the rule at creation time with a validation error.
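The creation-time validation described above might look like the following sketch. The index-kind names and rule shape are illustrative assumptions; only the structural-versus-retrieval-surface distinction follows the plan.

```go
package main

import "fmt"

type indexKind int

const (
	lookupIndex indexKind = iota
	rangeIndex
	compositeIndex
	fulltextIndex
	vectorIndex
)

// structural reports whether an index kind confers property immunity.
// Lookup, range, and composite indexes do; fulltext and vector do not.
func structural(k indexKind) bool {
	switch k {
	case lookupIndex, rangeIndex, compositeIndex:
		return true
	}
	return false
}

// validatePropertyRule rejects a property-level decay or promotion rule at
// creation time when the targeted property participates in a structural index.
func validatePropertyRule(property string, indexesOnProperty []indexKind) error {
	for _, k := range indexesOnProperty {
		if structural(k) {
			return fmt.Errorf("property %q participates in a structural index and cannot have a property-level decay or promotion rule", property)
		}
	}
	return nil
}

func main() {
	fmt.Println(validatePropertyRule("tenantId", []indexKind{rangeIndex}) != nil) // true: rejected
	fmt.Println(validatePropertyRule("summary", []indexKind{vectorIndex}) == nil) // true: allowed
}
```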
Property-level promotion should support at least these outcomes:
- higher ranking weight for the property during retrieval
- tier-like score boosts for reinforced or validated properties
- score floor or cap adjustments without changing the parent entity's stored fields
Property-level scores should only influence retrieval when the property is directly involved in matching, ranking, reranking, filtering, projection, summarization, vectorization, or vector-backed retrieval. A decayed or promoted property should not silently degrade or improve the score of the entire entity by default.
Edge decay should not be inferred from node decay by default. An edge must be able to decay on its own policy terms even if both endpoint nodes are non-decaying.
Properties are not suppression targets. A property with a low score for vectorization may be excluded from vectorization outputs or vector-backed retrieval, but it remains stored in place and directly queryable in Cypher.
6.4 Suppression Semantics
Visibility suppression applies only to whole nodes and whole edges.
When a node or edge crosses suppression eligibility:
- the node or edge may be marked suppressed in primary storage
- the node or edge should be skipped by retrieval and ranking paths as efficiently as possible
- the node or edge must be removed from secondary indexing asynchronously
- the system must not scan secondary indexes to discover which entries to remove
- the system should use the target's stored index-entry catalog to perform direct key deletion
Property-level decay must not cause property suppression, property movement, or property deletion from storage.
If a node remains indexed, its properties remain indexable under ordinary indexing rules. Property-level decay affects retrieval and vectorization behavior, not whether the property exists in storage. Properties that participate in structural indexes (lookup, range, and composite indexes) are entirely immune to decay scoring, decay hiding, and vectorization exclusion — they must remain stable and always visible for aggregation, joining, and lookup. Fulltext indexes and vector indexes are retrieval-surface indexes and do not confer this immunity.
6.5 Decay Function Semantics
The engine should support multiple decay function identifiers over time.
Initial supported scoring modes can include:
exponential
linear
step
none
The engine should resolve these as runtime scoring behavior, not as special categories.
These scoring modes should be accepted both:
- from resolved decay profile and constraint configuration, and
- from an explicit Cypher options object on decay scoring functions.
Cypher may override the profile-resolved scoring mode for the scope of that scoring expression only. Unified retrieval should not expose that override surface and should remain profile-resolved.
6.6 Promotion and Decay Resolution Order
Promotion policies are evaluated first. The promotion policy for the target is resolved and its WHEN predicates are evaluated before decay profile resolution begins. WHEN predicates determine which promotion profile applies and what tier the entity is in.
After promotion resolution, the decay profile is resolved and the base decay score is computed. The promotion adjustments are then applied to the base decay score to produce the final score. The final score determines query visibility.
ON ACCESS mutations execute after the final score and visibility determination. If the entity's final score falls below the visibility threshold (i.e., the entity is suppressed), ON ACCESS mutations do not execute. This prevents suppressed entities from accumulating access state — a suppressed entity should not record "accesses" that only occurred because the scorer was evaluating it, not because a user or query actually retrieved it. ON ACCESS mutations reflect genuine access by a visible entity, not internal scoring housekeeping.
The evaluation order is:
- Resolve WHEN predicates from the matching promotion policy (determines promotion tier)
- Resolve the decay profile and compute the base decay score
- Apply promotion adjustments to produce the final score
- Determine visibility: is the final score above the visibility threshold?
- Only if visible: execute ON ACCESS mutations (increment access counts, set timestamps, etc.)
- Flush ON ACCESS deltas to accessMeta asynchronously
The normative formula for final score computation is:
promotedScore = baseDecayScore × promotionMultiplier
flooredScore = max(promotedScore, promotionFloor)
cappedScore = min(flooredScore, promotionCap)
finalScore = max(cappedScore, decayFloor)
Where:
baseDecayScore is the output of the decay function (e.g., exp(-t * ln(2) / halfLife))
promotionMultiplier, promotionFloor, promotionCap come from the matched promotion profile (defaults: 1.0, 0.0, 1.0)
decayFloor comes from the decay profile's DECAY FLOOR directive (default: 0.0)
Order of operations: multiply → floor → cap → decay floor. The decay floor is applied last because it is a hard minimum from the decay profile, independent of promotion.
If no promotion policy matches, promotionMultiplier = 1.0, promotionFloor = 0.0, promotionCap = 1.0, and the formula reduces to max(baseDecayScore, decayFloor).
When multiple WHEN predicates match within the same promotion policy, the profile with the highest effective multiplier wins. This is deterministic and does not require an explicit composition directive.
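The tie-break rule is a simple deterministic selection. A sketch, with illustrative types:

```go
package main

import "fmt"

type matchedProfile struct {
	name       string
	multiplier float64
}

// pickWinner implements highest-effective-multiplier-wins across all
// promotion profiles whose WHEN predicates matched within one policy.
func pickWinner(matches []matchedProfile) (matchedProfile, bool) {
	if len(matches) == 0 {
		return matchedProfile{}, false
	}
	best := matches[0]
	for _, m := range matches[1:] {
		if m.multiplier > best.multiplier {
			best = m
		}
	}
	return best, true
}

func main() {
	winner, ok := pickWinner([]matchedProfile{
		{"reinforced", 1.5},
		{"validated", 2.0},
		{"pinned", 1.2},
	})
	fmt.Println(winner.name, ok) // validated true
}
```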
6.7 Explainability
For any entity or property, the system should be able to explain:
- whether decay applies
- which decay profile was selected
- which promotion policy matched and which profile was selected
- which score start time was selected
- which decay profile and inline rule selected it
- which promotion policy entry and WHEN predicate selected the profile
- what rate, threshold, floor, and multiplier are active
- whether decay age was measured from CREATED, VERSION, or CUSTOM and which property path was used if CUSTOM, whether the value was resolved from accessMeta or stored properties, and whether a fallback to entity creation time occurred due to a null or unparsable value
- why a node or edge was suppressed or not suppressed
- why a node or edge was deindexed or pending deindex
- why a property was excluded from vectorization or retrieval surfaces without being suppressed
- whether a property is immune to decay because it participates in a structural index (lookup, range, or composite)
6.8 Native Cypher Access
The decay subsystem should expose scoring through native Cypher functions so callers can inspect resolved scores without altering Neo4j-compatible node or relationship structures.
Proposed functions:
decayScore(entity) returns the effective scalar decay score for a node or edge
decayScore(entity, { scoringMode: 'linear' }) returns the effective scalar decay score for a node or edge using the requested scoring mode
decayScore(entity, { property: 'summary' }) returns the effective scalar decay score for a specific property on that node or edge
decayScore(entity, { property: 'summary', scoringMode: 'step' }) returns the effective scalar decay score for a specific property using the requested scoring mode
decay(entity) returns a structured decay object for the node or edge
decay(entity, { scoringMode: 'linear' }) returns a structured decay object for the node or edge using the requested scoring mode
decay(entity, { property: 'summary' }) returns a structured decay object for the requested property
decay(entity, { property: 'summary', scoringMode: 'step' }) returns a structured decay object for the requested property using the requested scoring mode
The options-object shape avoids ambiguous string overloads. property and scoringMode are named keys rather than positional string arguments.
The structured decay(...) result should always expose a Cypher-accessible .score field so callers can write concise expressions without needing a second helper function when they want richer metadata.
Suggested fields on decay(...) results:
score
policy
scope
function
visibilityThreshold
floor
applies
reason
scoreFrom
The decay(...) object is a derived value. It should not imply that score metadata is being persisted back onto the node, edge, or property itself.
If a caller invokes decayScore(...) or decay(...) for a target with no matching policy, the function should return the non-decaying/default result rather than failing. The default scalar should be 1.0, and the structured form should report a neutral non-decaying result.
The existing Cypher scoring API remains unchanged. The score returned by decayScore(...) and decay(...).score is the final resolved score after applying the decay profile, the profile-declared score start time, and the matching promotion policy.
The promotion policy subsystem should expose accessMeta through a native Cypher function so callers can inspect access-tracking state without altering Neo4j-compatible node or relationship structures.
Proposed function:
policy(entity) returns the accessMeta map for the node or edge as a structured Cypher object
There is no correlated policyScore() scalar function. Unlike decay() / decayScore(), the accessMeta map is a general-purpose key-value store with no single canonical scalar to extract. Callers access individual keys through standard Cypher property access on the returned map, for example policy(n).accessCount or policy(r).traversalCount.
Suggested fields on policy(...) results:
- all keys present in the target's accessMeta entry, projected as a Cypher map
- _targetId: the target node or edge identifier
- _targetScope: node or edge
- _lastAccessedAt: timestamp of the most recent access to the target
- _lastMutatedAt: timestamp of the most recent ON ACCESS mutation
- _mutationCount: total number of ON ACCESS mutations applied
The policy(...) object is a derived value read from the accessMeta index. It does not imply that access-tracking metadata is stored on the node or edge itself.
If a caller invokes policy(...) for a target with no accessMeta entry, the function should return an empty map with only the _targetId and _targetScope fields rather than failing.
The scoring subsystem should expose a bypass function so callers can retrieve the raw stored entity without decay-driven visibility filtering or property hiding.
Proposed function:
reveal(entity) returns the raw stored node or edge as it exists in primary storage, bypassing all scoring-driven visibility suppression and property-level decay hiding
reveal() is a plan-level visibility bypass marker, not a runtime function. It does not disable scoring — the entity still has a resolved score. It disables the visibility gate that would otherwise hide the entity or its properties from the query result. reveal() is the only mechanism to access entities that are invisible due to scoring. It does not affect decayScore(), decay(), or policy() — those functions still return the resolved values.
When the query planner detects reveal(variable) anywhere in the query (RETURN, WITH, WHERE, or ORDER BY), it marks that variable's binding as visibility-bypassed during plan compilation. A visibility-bypassed binding skips scoring-driven suppression at MATCH time. The entity is always materialized. Its score is still computed (so decayScore() and decay() return correct values), but the visibility gate is disabled for that binding. This is equivalent to the planner rewriting MATCH (m:MemoryEpisode) RETURN reveal(m) into a plan where m's scan does not apply the visibility filter.
If reveal() is used on one variable but not another in the same query, only the revealed variable bypasses visibility. Example: MATCH (m:MemoryEpisode)-[:EVIDENCES]->(k:KnowledgeFact) RETURN reveal(m), k — m bypasses visibility, k does not.
reveal() with no downstream usage in the query is a no-op (standard dead-code elimination). reveal() wrapping an already-visible entity is a no-op at runtime.
When reveal() is used, the returned entity includes all stored properties, including any that would normally be hidden by property-level decay exclusion. The entity appears in query results regardless of its final score.
reveal() works on both nodes and edges. It should be usable in RETURN, WITH, WHERE, and any other Cypher clause that accepts an entity expression.
If decay is not enabled or the entity is not subject to any scoring-driven visibility suppression, reveal() is a no-op and returns the entity unchanged.
Suppressed properties do not exist as a concept. Properties remain directly queryable in Cypher even when property-level decay excludes them from vectorization or vector-backed retrieval.
Example usage:
MATCH (n:SessionRecord)
RETURN n, decayScore(n) AS entityDecayScore
MATCH (n:SessionRecord)
RETURN n.summary, decayScore(n, {property: 'summary'}) AS summaryDecayScore
MATCH ()-[r:CO_ACCESSED]-()
RETURN r, decayScore(r) AS edgeDecayScore
MATCH ()-[r:CO_ACCESSED]-()
RETURN r.signalScore, decayScore(r, {property: 'signalScore'}) AS signalScoreDecay
MATCH (n:SessionRecord)
RETURN n.summary, n.summary AS stillDirectlyQueryableInCypher
MATCH (n:SessionRecord)
RETURN n, policy(n) AS accessMeta
MATCH (n:SessionRecord)
WHERE policy(n).accessCount >= 5
RETURN n, policy(n).accessCount AS accessCount, policy(n)._lastMutatedAt AS lastAccessed
MATCH ()-[r:CO_ACCESSED]-()
RETURN r, policy(r).traversalCount AS traversals, decay(r) AS decayMeta
// Retrieve a node that may be invisible due to scoring
MATCH (n:SessionRecord {id: $id})
RETURN reveal(n) AS rawNode, decayScore(n) AS score
// Retrieve all suppressed or hidden nodes with their scores for diagnostics
MATCH (n:SessionRecord)
RETURN reveal(n) AS rawNode, decay(n) AS decayMeta, policy(n) AS accessMeta
// Bypass property-level hiding to see all stored properties
MATCH ()-[r:CO_ACCESSED]-()
RETURN reveal(r) AS rawEdge, reveal(r).signalScore AS rawSignal
Compatibility rule:
- RETURN n remains Neo4j-compatible and does not automatically inject decay metadata into the node; however, n is subject to scoring-driven visibility — if the entity's score renders it invisible, it will not appear in results unless accessed through reveal(n)
- RETURN r remains Neo4j-compatible and does not automatically inject decay metadata into the edge; the same visibility rules apply
- RETURN reveal(n) or RETURN reveal(r) bypasses scoring-driven visibility and property hiding, returning the raw stored entity
- callers opt in by returning decayScore(...), decay(...), policy(...), or reveal(...) explicitly as additional columns
- property-level scores are therefore visible to Cypher without changing Bolt node or relationship structures
- missing decay profile should behave like ordinary metadata lookup in Cypher: no error, neutral score
Implementation rule:
- Cypher scoring functions should call the same shared runtime scorer used by unified retrieval scoring
- Cypher options objects should be validated against the accepted keys property and scoringMode
- supported Cypher scoringMode values remain: exponential, linear, step, none
- unified retrieval should call the same scorer but should not accept a caller-supplied scoringMode
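A sketch of the options-object validation, assuming the accepted keys and mode values listed above (the function name and error wording are illustrative):

```go
package main

import "fmt"

var allowedModes = map[string]bool{
	"exponential": true,
	"linear":      true,
	"step":        true,
	"none":        true,
}

// validateOptions rejects unknown option keys and unsupported scoringMode
// values before the scorer is invoked.
func validateOptions(opts map[string]interface{}) error {
	for k, v := range opts {
		switch k {
		case "property":
			if _, ok := v.(string); !ok {
				return fmt.Errorf("option 'property' must be a string")
			}
		case "scoringMode":
			m, ok := v.(string)
			if !ok || !allowedModes[m] {
				return fmt.Errorf("option 'scoringMode' must be one of exponential|linear|step|none")
			}
		default:
			return fmt.Errorf("unknown option key %q", k)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateOptions(map[string]interface{}{"property": "summary", "scoringMode": "step"})) // <nil>
	fmt.Println(validateOptions(map[string]interface{}{"halfLife": 3}) != nil)                         // true
}
```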
6.9 Unified Search Metadata
The unified search service should follow the same derived-on-read model as native Cypher.
It should not persist node-, edge-, or property-level decay scores into stored entity fields. Instead, when requested, it should add resolved scoring metadata into a separate response meta structure.
Unified retrieval scoring should use the same scorer as Cypher scoring functions, but it should remain profile-and-policy-resolved and should not expose the Cypher-only scoringMode override.
The shape should be a keyed object rather than an array of single-entry maps.
Preferred shape:
{
  "scores": {
    "node-id-12": {
      "decay": 0.82,
      "properties": {
        "property1": { "decay": 0.44 },
        "property2": { "decay": 0.91 }
      }
    },
    "edge-id-77": {
      "decay": 0.63,
      "properties": {
        "signalScore": { "decay": 0.28 }
      }
    }
  }
}
Suggested conventions:
- top-level key by entity id
- entity-level score at scores[id].decay
- property-level scores nested at scores[id].properties[propertyKey].decay
- optional richer metadata can be added later beside decay, such as policy, reason, scope, or scoreFrom
- if no policy applies, decay should be reported as 1.0 unless an explicit configured default policy says otherwise
Suggested retrieval scoring inputs:
- options object with optional property when scoring needs to target a specific property
- options object may later grow additional explicit keys without breaking call-site semantics
- retrieval callers should not provide scoringMode; mode selection comes from the resolved decay profile
The existing unified search metadata shape remains unchanged. Promotion-policy effects and score-start-time effects are reflected in the resolved score value rather than through a new response field, though richer metadata may optionally expose the selected scoreFrom.
Suppressed nodes and edges should be excluded from unified retrieval as soon as possible. Property-level exclusions should affect vectorization and vector-backed retrieval only, while stored properties remain directly queryable in Cypher.
When vector search (e.g., db.retrieve, db.index.vector.queryNodes) returns candidates that are subsequently suppressed by decay visibility, the caller may receive fewer results than the requested LIMIT. To address this, the vector search layer should chunk results based on the LIMIT value and continue pulling additional chunks until the original limit is satisfied or the index is exhausted. This ensures that decay-filtered vector search returns the expected number of visible results.
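The chunked refill loop can be sketched as follows. `fetchChunk` and `isVisible` are hypothetical stand-ins for the real vector index and scorer; only the refill-until-limit-or-exhausted behavior follows the plan.

```go
package main

import "fmt"

// visibleTopK keeps pulling candidate chunks from the index until the
// requested limit of visible results is satisfied or the index is exhausted.
func visibleTopK(limit int, fetchChunk func(offset, n int) []string, isVisible func(string) bool) []string {
	results := make([]string, 0, limit)
	offset := 0
	for len(results) < limit {
		chunk := fetchChunk(offset, limit) // chunk size based on the LIMIT value
		if len(chunk) == 0 {
			break // index exhausted
		}
		offset += len(chunk)
		for _, id := range chunk {
			if isVisible(id) {
				results = append(results, id)
				if len(results) == limit {
					break
				}
			}
		}
	}
	return results
}

func main() {
	index := []string{"a", "b", "c", "d", "e", "f"}
	suppressed := map[string]bool{"a": true, "c": true, "d": true}
	fetch := func(offset, n int) []string {
		if offset >= len(index) {
			return nil
		}
		end := offset + n
		if end > len(index) {
			end = len(index)
		}
		return index[offset:end]
	}
	fmt.Println(visibleTopK(3, fetch, func(id string) bool { return !suppressed[id] })) // [b e f]
}
```

With half the first chunk suppressed, the loop pulls a second chunk and still returns the three visible results the caller asked for.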
Properties are not suppression targets. Properties may receive decay scores and vectorization-exclusion behavior, but they remain stored in place and remain directly queryable through Cypher.
This plan is intentionally model-agnostic. It is not tied to any one research paper or taxonomy, although it was inspired by a research paper that called out NornicDB specifically: https://arxiv.org/pdf/2604.11364
2. Problem Statement
NornicDB currently has memory-decay behavior that depends on fixed tier names and fixed decay assumptions. That makes the system harder to evolve because retention logic is embedded in runtime code rather than expressed declaratively.
That creates six engineering problems.
The system should instead treat decay behavior as configurable retention profiles, promotion behavior as separate configurable scoring profiles and policies, score start time as an explicit profile decision, and deindex cleanup as a dedicated deindex workflow for nodes and edges only.
3. Design Principles
- NO DECAY must be directly expressible in policy definitions.
- Score start time must be an explicit profile decision: CREATED, VERSION, or CUSTOM.
- 9a. Properties that participate in structural indexes (lookup indexes, range indexes, and composite indexes) are immune to decay scoring, decay hiding, and property-level exclusion. Fulltext indexes and vector indexes are retrieval-surface indexes and do not confer property immunity. Indexed properties must remain stable and always visible because they are relied upon for aggregation, joining, and lookup.
4. Target Architecture
4.1 Decay Profile Layer
Decay profiles are the mechanism that decides whether decay applies, at what rate, at what scope, and from which score start time decay age is measured. Decay profiles are the only decay authoring surface — there is no separate decay policy concept.
Required behavior:
- express NO DECAY and rate-based decay without relying on fixed tier names

Suggested fit in NornicDB:
4.2 Promotion Layer
Promotion behavior is split into two object types: profiles and policies.
Promotion profiles are named parameter bundles (multiplier, score floor, score cap, scope). They contain no logic and cannot be targeted to entities directly. They are referenced by name inside promotion policy APPLY blocks.
Promotion policies contain logic — FOR targets, APPLY blocks, WHEN predicates, and optional ON ACCESS mutation blocks. Policies bind profiles to specific node labels, edge types, and property paths. Promotion policies are resolved first, before decay profile resolution. WHEN predicates are evaluated before ON ACCESS mutations — if the entity is visibility-suppressed (below the visibility threshold), ON ACCESS mutations do not execute. This prevents suppressed entities from accumulating access state they should not have. The promotion adjustments are applied to the base decay score to produce the final score without changing the existing Cypher scoring API.

Required behavior:
- ON ACCESS mutation blocks that execute when the target is accessed during scoring resolution, but only after WHEN predicates have been evaluated and only if the entity passes the suppression gate (is not visibility-suppressed); ON ACCESS mutations write exclusively to a separate accessMeta index keyed to the target node or edge, never to the node or edge itself

Suggested fit in NornicDB:
- store ON ACCESS mutation state per target node or edge as map[string]interface{}, serialized in msgpack alongside other data files for performance
- accumulate ON ACCESS deltas in a sharded counter ring ([N]atomic.Int64, N = number of shards, e.g. 64), keyed by hash(entityID) % N; each shard holds a delta, not an absolute value; no msgpack, no Badger write, no allocation; the read path sees storedValue + pendingDelta via a single atomic load
- store last-touch timestamps (lastAccessedAt, lastTraversedAt) as atomic.Int64 (UnixNano) in the same shard struct; the flush writes the latest value, not an accumulation
- let reads of n.accessCount see persisted + buffered delta by reading through the accumulator, not Badger
- resolve ON ACCESS blocks and WHEN predicates from accessMeta first, falling back to the node or edge's stored properties

4.3 Authoring Subsystem Layer
The authoring subsystem is the surface for declaring decay profiles and promotion profiles and policies.
Required behavior:
Suggested fit in NornicDB:
4.4 Runtime Resolution Layer
The runtime resolution layer converts configuration and profiles into effective decay behavior and final score for a node, edge, or property. Scoring happens before query visibility — a node or edge must be scored before it becomes visible to the query.
Required behavior:
- evaluate WHEN predicates to determine the entity's promotion tier and whether it is suppressed; execute ON ACCESS mutations only if the entity passes the suppression gate (is not visibility-suppressed after promotion and decay resolution)
- allow reveal() to bypass scoring-driven visibility
- return the raw stored entity through reveal() without decay-driven visibility filtering or property hiding

Suggested fit in NornicDB:
- score start-time selection for CREATED, VERSION, and CUSTOM
- a reveal() bypass path that returns the raw stored entity, skipping scoring-driven visibility and property hiding

4.5 MVCC Interaction Layer
MVCC version resolution and decay scoring are separate concerns, but scoring gates query visibility. MVCC determines which version of an entity exists at the transaction snapshot. Scoring then determines whether that version is visible to the query.
Required behavior:
- support CREATED, where decay age begins at the entity's original creation timestamp
- support VERSION, where decay age begins at the latest visible version timestamp under MVCC
- allow reveal() to bypass scoring-driven visibility and return the MVCC-resolved version without suppression

Suggested fit in NornicDB:
4.6 Visibility Suppression and Deindex Layer
The visibility suppression and deindex layer is the mechanism that removes suppressed whole nodes and whole edges from indexing in the most performant way possible.
Required behavior:
Suggested fit in NornicDB:
5. Logical Resolution Model
Because decay scores are derived rather than stored on fields, this section describes runtime resolution artifacts and schema objects, not a stored score data model.
5.1 Schema Objects
DecayProfile
Database object used to define reusable decay parameter bundles. Profiles contain no logic — they declare configuration values only.
Minimum fields:
- score start time: CREATED, VERSION, or CUSTOM
- scoreFromProperty, required when the score start time is CUSTOM

PromotionProfile
Database object used to define reusable promotive scoring parameter bundles. Profiles contain no logic — they declare configuration values only.
Minimum fields:
PolicyBackedDecayRule
Logical rule compiled from decay profile definitions and used by the resolver.
Minimum fields:
PolicyBackedPromotionRule
Logical rule compiled from promotion policy definitions and used by the resolver.
Minimum fields:
AccessMeta
Persistent metadata index that stores
ON ACCESSmutation state separately from the node or edge it describes. Each entry is amap[string]interface{}keyed to a target node or edge identifier. AccessMeta entries are serialized in msgpack alongside other data files for performance.Nodes and edges are read-only during
ON ACCESS evaluation. All writes within an ON ACCESS block mutate the target's accessMeta entry, never the target's stored properties. All reads within ON ACCESS blocks and WHEN predicates resolve from the target's accessMeta entry first, falling back to the target's stored properties when the key is not present in accessMeta. The stored(entity) qualifier may be used inside WHEN predicates and ON ACCESS blocks to force a read from stored node or edge properties, bypassing accessMeta-first resolution. stored() is the escape hatch for properties managed by external processes and is not a general Cypher function.
AccessMeta has a fast-path fixed-layout struct for the most common fields (accessCount int64, lastAccessedAt int64, traversalCount int64, lastTraversedAt int64). Only custom keys fall back to the map[string]interface{} overflow map. The fixed-layout struct serializes to a known-size byte slice with no reflection. msgpack is used only for the overflow map of custom keys. Pre-allocated per-entity byte buffers in the flush goroutine are reused across iterations (sync.Pool or ring buffer).
All integer values in accessMeta are normalized to
int64 and all floating-point values to float64 at deserialization time. This normalization ensures that Cypher arithmetic in ON ACCESS and WHEN blocks operates on consistent types — coalesce(n.accessCount, 0) + 1 always works because both operands are int64. Boolean values remain bool. String values remain string. time.Time is stored as int64 (UnixNano) and converted on read.
ON ACCESS mutations are not executed as literal Cypher writes on every read. They are accumulated in-process via a sharded counter ring and flushed asynchronously to Badger in batches. See section 4.2 for the accumulator design.
Minimum fields:
map[string]interface{}
AccessMeta Lifecycle
When a node or edge is deleted (tombstoned in MVCC), its accessMeta entry is enqueued for deletion in the same transaction. The accessMeta key is deleted immediately from the in-process accumulator and enqueued as a deindex work item alongside any index-entry catalog cleanup.
When a node or edge is suppressed, its accessMeta entry is retained — suppressed entities are still accessible via
reveal(), and policy() on a revealed entity should still return its access history. The accessMeta entry is only deleted when the entity is physically reclaimed by the compliance retention lifecycle.
When MVCC version pruning removes all versions of an entity, its accessMeta entry is eligible for deletion. The PruneMVCCVersions function should check for orphaned accessMeta entries and delete them.
AccessMeta keys use a dedicated prefix (prefixAccessMeta) so that orphan detection is a prefix scan bounded to accessMeta, not a full database scan.
AccessMeta is included in MVCC snapshot isolation. Updates are atomic but the snapshot is always as of the transaction time.
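The prefix-bounded orphan scan can be sketched as below. `prefixAccessMeta` is the name used in the text; the string form `"am:"`, the helper names, and the in-memory map standing in for the Badger key space are all assumptions made for illustration.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Hypothetical string stand-in for the real accessMeta key prefix.
const prefixAccessMeta = "am:"

// findOrphanAccessMeta scans only keys under the accessMeta prefix and
// returns those whose target entity no longer has any live MVCC version.
// `store` stands in for the KV store; `hasLiveVersion` stands in for an
// MVCC version lookup (e.g., inside PruneMVCCVersions).
func findOrphanAccessMeta(store map[string][]byte, hasLiveVersion func(id string) bool) []string {
	var orphans []string
	for key := range store {
		if !strings.HasPrefix(key, prefixAccessMeta) {
			continue // bounded scan: non-accessMeta keys are never touched
		}
		targetID := strings.TrimPrefix(key, prefixAccessMeta)
		if !hasLiveVersion(targetID) {
			orphans = append(orphans, key)
		}
	}
	sort.Strings(orphans) // deterministic order for the caller
	return orphans
}

func main() {
	store := map[string][]byte{
		"am:node-1":   nil,
		"am:node-2":   nil,
		"node:node-1": nil, // outside the prefix, never scanned
	}
	live := map[string]bool{"node-1": true}
	fmt.Println(findOrphanAccessMeta(store, func(id string) bool { return live[id] }))
}
```

In a real store the scan would be a Badger prefix iterator rather than a full map walk; the point is only that orphan detection stays bounded to the accessMeta prefix.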
IndexEntryCatalog
Persistent catalog of exact index entries created for a node or edge.
Minimum fields:
DeindexWorkItem
Persistent background work item used to deindex a visibility-suppressed node or edge.
Minimum fields:
5.2 Derived Runtime Artifacts
ScoringResolution
Derived resolution result produced by the shared resolver for a requested node, edge, or property.
Minimum fields:
DecayResolutionMeta
Derived metadata emitted at read time for Cypher and unified search surfaces.
Minimum fields:
5.3 Design Rule
- ON ACCESS mutation state is persisted in a separate accessMeta index keyed per target node or edge, not on the node or edge itself
- Each entry is a map[string]interface{} serialized in msgpack alongside other data files for performance
- Nodes and edges are read-only during ON ACCESS evaluation; all writes target the accessMeta index
- Reads within ON ACCESS blocks and WHEN predicates resolve from accessMeta first, falling back to stored node or edge properties
- The policy() Cypher function projects accessMeta outward without implying that access-tracking metadata is stored on the node or edge
6. Query and Resolution Semantics
6.1 Resolution Rules
Scoring happens before query visibility. When a query touches a node or edge, the engine must resolve and apply promotion and decay scoring before deciding whether the entity is visible to the query. An entity whose final score falls below the visibility threshold or whose decay profile renders it invisible must not appear in
MATCH results, WHERE evaluation, or search hits unless the caller explicitly uses reveal(entity) to bypass scoring-driven visibility.
The resolution order is: promotion first, then decay, then score-start resolution, then visibility determination.
Every scoring-aware read or maintenance operation should resolve the promotion policy first, in this order:
(FOR (n:*) or FOR ()-[r:*]-())
Then every scoring-aware operation should resolve the decay profile in this order:
(FOR (n:*) or FOR ()-[r:*]-())
Then every score-aware read should resolve the score start time from the resolved decay profile:
- CREATED, if the resolved decay profile declares CREATED
- VERSION, if the resolved decay profile declares VERSION
- CUSTOM, if the resolved decay profile declares CUSTOM with a scoreFromProperty path; the property is resolved from accessMeta first, falling back to stored node or edge properties; if the resolved value is null or unparsable, log a warning and fall back to entity creation time
Then the engine computes the final score and determines visibility:
- If the final score falls below the visibility threshold, the entity is suppressed and can only be accessed through reveal(); ON ACCESS mutations do not execute for suppressed entities
- Visible entities are returned normally; suppressed entities never appear in results unless wrapped in reveal()
If no promotion policy matches, the target should resolve with a neutral promotion effect.
If no decay profile matches, the engine should either treat the target as non-decaying or use an explicit configured default decay profile, but it must not silently assume any legacy tier.
If no score start time matches, the engine should use an explicit configured default. The recommended default is
VERSION.
Compiled Binding Tables and Lazy Scoring
The resolution cascade above is the logical model. The implementation pre-flattens it at DDL time using a three-tier optimization strategy.
Tier 1 — Compile-time profile binding table. When a decay profile or promotion policy is created, altered, or dropped, the schema manager builds a direct lookup table:
map[string]*compiledBinding keyed by label or edge type. Each compiledBinding holds the resolved decay profile pointer, the resolved promotion policy pointer, the visibility threshold, the score-start mode, and the decay function pointer. Wildcard entries are expanded into per-label/per-type entries at compile time. Resolution at query time is a single map lookup — no cascade. The table is rebuilt on any DDL change, which is rare. For multi-label nodes, the table keys on sorted label sets, not individual labels.
Tier 2 — Suppressed-bit fast path. Suppressed entities already have a persisted marker in primary storage. The read path checks the suppressed bit before any profile resolution. If suppressed and the query does not use
reveal(), skip immediately. Cost: one byte check. This eliminates full resolution for the entire suppressed population.
Tier 3 — Amortized score computation. For non-suppressed entities with exponential decay, the score is a pure function of
(now - scoreFrom, halfLife). Pre-compute a score threshold timestamp: thresholdAge = -halfLife * ln(visibilityThreshold) / ln(2). At read time, compare now - scoreFrom > thresholdAge using integer subtraction on UnixNano values — no math.Exp() needed for the visibility check. Only compute the precise float64 score when the entity survives visibility and is projected into results (lazy scoring). This reduces the hot path to one integer comparison per entity. The thresholdAge is computed once at compile time per decay profile and stored as int64 nanoseconds in the compiled binding.
For
ORDER BY decayScore(n), the scorer can use a monotonic proxy: scoreFromTime.UnixNano() itself is monotonically related to the decay score (newer = higher score) for a fixed half-life and function. Sorting by scoreFromTime DESC is equivalent to sorting by decayScore DESC without computing any exponentials. The precise score is only needed if the caller mixes decayScore() with other expressions in ORDER BY.
Multi-Label Node Resolution
If a node has multiple labels (e.g.,
:SessionRecord:MemoryEpisode) and separate decay profiles exist for both labels, the following rules apply:
- When CREATE DECAY PROFILE ... FOR (n:LabelA) is issued, the schema manager checks whether any existing node in the database has both :LabelA and another label that already has a targeted binding. If so, the CREATE fails with: "Conflict: nodes with labels [:LabelA, :LabelB] would match two decay profiles. Create a dedicated profile for the multi-label combination or drop one of the conflicting profiles."
- A multi-label combination is targeted explicitly with FOR (n:SessionRecord:MemoryEpisode). A multi-label target takes precedence over any single-label target.
6.2 MVCC Score Start-Time Semantics
The engine should support three profile-declared score start times:
- CREATED
- VERSION
- CUSTOM
These semantics must apply equally to nodes and edges.
CREATED
CREATED means the decay age is measured from the entity's original creation timestamp.
Semantics:
CREATED is the durable, age-from-origin option
VERSION
VERSION means the decay age is measured from the latest visible version timestamp under MVCC.
Semantics:
VERSION is the freshness-from-last-change option
CUSTOM
CUSTOM means the decay age is measured from a user-specified property value on the entity.
Semantics:
The property path is declared through the scoreFromProperty option using accessMeta-first resolution: the property is resolved from the target's accessMeta entry first, falling back to the target's stored node or edge properties only when the key is not present in accessMeta
CUSTOM is the operator-defined, domain-specific option
Rule
Visibility is always snapshot-based. Only the decay-age start time changes.
The scoring timestamp ("now") is the transaction's MVCC snapshot timestamp for query paths, or the maintenance cycle start time for background paths. The scorer does not call
time.Now(). The scorer receives the snapshot timestamp from the transaction context. This ensures deterministic, repeatable scoring within a transaction: the same entity queried twice in the same transaction returns the same score. The snapshot timestamp is already available in the MVCC read path (MVCCVersion.CommitTimestamp). It is passed through to the scorer as scoringTime. For background maintenance (recalc, suppression pass), scoringTime is time.Now() at the start of the maintenance cycle, frozen for the duration of the batch.
The system must not create new stored versions solely because a derived score changed.
6.3 Property-Level and Edge-Level Semantics
Property-level decay is required for mixed-longevity entities.
Examples:
- A Profile node may keep name and tenantId permanently while decaying lastConversationSummary
- A Task edge may keep identity and timestamps permanently while decaying a transient confidence field
- A Document node may keep canonical content permanently while decaying ranking hints or ephemeral summaries
- A CO_ACCESSED edge may decay as a whole, even if neither endpoint node decays at the same rate
Edge-level decay should support at least these outcomes:
Property decay should support at least these outcomes:
When the AccessFlusher detects that a property's score has crossed below its visibility threshold during a flush cycle, it writes an explicit nil for that property key in the entity's AccessMetaEntry.Overflow map and re-queues the node for embedding via InvalidateManagedEmbeddings + AddToPendingEmbeddings; the embed worker applies accessMeta-first projection over node properties before building embed text — any property that resolves to explicit nil after projection is excluded from the embed text; when a property's score rises above the threshold (e.g., via promotion), the flusher removes the nil key from the overflow map and re-queues the node, and the next embed cycle includes the property again; no separate background scorer or persisted suppression list is needed — the overflow map nil convention is the suppression signal
Properties that participate in structural indexes (lookup indexes, range indexes, and composite indexes) are immune to property-level decay scoring, decay hiding, and vectorization exclusion. These properties must remain stable and always visible to queries because index-backed operations depend on their values being present and consistent. Fulltext indexes and vector indexes are retrieval-surface indexes and do not confer property immunity — property-level decay may exclude a property from a vector index or fulltext search without breaking aggregation or joins. If a decay profile or promotion policy contains a property-level rule that targets a property participating in a structural index, the engine should reject the rule at creation time with a validation error.
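The accessMeta-first projection that the embed worker applies before building embed text can be sketched as follows. The function name `projectForEmbedding` is hypothetical; the explicit-nil convention in the overflow map is the suppression signal described above.

```go
package main

import "fmt"

// projectForEmbedding applies accessMeta-first projection over stored node
// properties before embed text is built: an explicit nil in the overflow map
// excludes the property from the embedding surface while it remains stored
// and directly queryable; any non-nil overflow value wins over the stored one.
func projectForEmbedding(stored, overflow map[string]interface{}) map[string]interface{} {
	out := make(map[string]interface{}, len(stored))
	for k, v := range stored {
		if ov, present := overflow[k]; present {
			if ov == nil {
				continue // explicit nil: excluded from embed text, still stored
			}
			out[k] = ov // accessMeta value wins over the stored value
			continue
		}
		out[k] = v
	}
	return out
}

func main() {
	stored := map[string]interface{}{"name": "profile-7", "summary": "long transient text"}
	overflow := map[string]interface{}{"summary": nil} // written by the flusher on score drop
	fmt.Println(projectForEmbedding(stored, overflow))
}
```

When promotion lifts the property's score back above the threshold, the flusher deletes the nil key and the same projection naturally includes the property on the next embed cycle.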
Property-level promotion should support at least these outcomes:
Property-level scores should only influence retrieval when the property is directly involved in matching, ranking, reranking, filtering, projection, summarization, vectorization, or vector-backed retrieval. A decayed or promoted property should not silently degrade or improve the score of the entire entity by default.
Edge decay should not be inferred from node decay by default. An edge must be able to decay on its own policy terms even if both endpoint nodes are non-decaying.
Properties are not suppression targets. A property with a low score for vectorization may be excluded from vectorization outputs or vector-backed retrieval, but it remains stored in place and directly queryable in Cypher.
6.4 Suppression Semantics
Visibility suppression applies only to whole nodes and whole edges.
When a node or edge crosses suppression eligibility:
Property-level decay must not cause property suppression, property movement, or property deletion from storage.
If a node remains indexed, its properties remain indexable under ordinary indexing rules. Property-level decay affects retrieval and vectorization behavior, not whether the property exists in storage. Properties that participate in structural indexes (lookup, range, and composite indexes) are entirely immune to decay scoring, decay hiding, and vectorization exclusion — they must remain stable and always visible for aggregation, joining, and lookup. Fulltext indexes and vector indexes are retrieval-surface indexes and do not confer this immunity.
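The creation-time validation implied by the structural-index immunity rule can be sketched as below. The index-kind enum and the `indexesOn` catalog lookup are assumptions standing in for the schema manager's real index catalog.

```go
package main

import "fmt"

type indexKind int

const (
	indexLookup indexKind = iota
	indexRange
	indexComposite
	indexFulltext
	indexVector
)

// isStructural: lookup, range, and composite indexes confer property
// immunity; fulltext and vector are retrieval-surface indexes and do not.
func isStructural(k indexKind) bool {
	switch k {
	case indexLookup, indexRange, indexComposite:
		return true
	}
	return false
}

// validatePropertyRule rejects, at profile- or policy-creation time, any
// property-level rule that targets a property backed by a structural index.
// `indexesOn` stands in for the schema manager's index catalog.
func validatePropertyRule(label, property string, indexesOn func(label, property string) []indexKind) error {
	for _, k := range indexesOn(label, property) {
		if isStructural(k) {
			return fmt.Errorf("property %q on %q participates in a structural index; property-level decay and promotion rules are rejected", property, label)
		}
	}
	return nil // vector/fulltext participation alone is fine
}

func main() {
	catalog := func(label, property string) []indexKind {
		if label == "Profile" && property == "tenantId" {
			return []indexKind{indexRange}
		}
		return []indexKind{indexVector}
	}
	fmt.Println(validatePropertyRule("Profile", "tenantId", catalog))
	fmt.Println(validatePropertyRule("Document", "summary", catalog))
}
```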
6.5 Decay Function Semantics
The engine should support multiple decay function identifiers over time.
Initial supported scoring modes can include:
- exponential
- linear
- step
- none
The engine should resolve these as runtime scoring behavior, not as special categories.
These scoring modes should be accepted both:
Cypher may override the profile-resolved scoring mode for the scope of that scoring expression only. Unified retrieval should not expose that override surface and should remain profile-resolved.
6.6 Promotion and Decay Resolution Order
Promotion policies are evaluated first. The promotion policy for the target is resolved and its
WHEN predicates are evaluated before decay profile resolution begins. WHEN predicates determine which promotion profile applies and what tier the entity is in.
After promotion resolution, the decay profile is resolved and the base decay score is computed. The promotion adjustments are then applied to the base decay score to produce the final score. The final score determines query visibility.
ON ACCESS mutations execute after the final score and visibility determination. If the entity's final score falls below the visibility threshold (i.e., the entity is suppressed), ON ACCESS mutations do not execute. This prevents suppressed entities from accumulating access state — a suppressed entity should not record "accesses" that only occurred because the scorer was evaluating it, not because a user or query actually retrieved it. ON ACCESS mutations reflect genuine access by a visible entity, not internal scoring housekeeping.
The evaluation order is:
- Evaluate WHEN predicates from the matching promotion policy (determines promotion tier)
- Resolve the decay profile and compute the base decay score
- Apply promotion adjustments to produce the final score and determine visibility
- Execute ON ACCESS mutations (increment access counts, set timestamps, etc.)
- Flush ON ACCESS deltas to accessMeta asynchronously
The normative formula for final score computation is:
finalScore = max(min(max(baseDecayScore * promotionMultiplier, promotionFloor), promotionCap), decayFloor)
Where:
- baseDecayScore is the output of the decay function (e.g., exp(-t * ln(2) / halfLife))
- promotionMultiplier, promotionFloor, promotionCap come from the matched promotion profile (defaults: 1.0, 0.0, 1.0)
- decayFloor comes from the decay profile's DECAY FLOOR directive (default: 0.0)
Order of operations: multiply → floor → cap → decay floor. The decay floor is applied last because it is a hard minimum from the decay profile, independent of promotion.
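The order of operations above can be written directly as a small pure function; this sketch follows the stated order (multiply, promotion floor, promotion cap, decay floor) and the stated defaults.

```go
package main

import (
	"fmt"
	"math"
)

// finalScore applies the normative order of operations:
// multiply, then promotion floor, then promotion cap, then decay floor.
func finalScore(baseDecayScore, promotionMultiplier, promotionFloor, promotionCap, decayFloor float64) float64 {
	s := baseDecayScore * promotionMultiplier
	s = math.Max(s, promotionFloor) // promotion floor
	s = math.Min(s, promotionCap)   // promotion cap
	return math.Max(s, decayFloor)  // decay floor: hard minimum, applied last
}

func main() {
	// No matching promotion policy: defaults 1.0 / 0.0 / 1.0 reduce the
	// formula to max(baseDecayScore, decayFloor).
	fmt.Println(finalScore(0.3, 1.0, 0.0, 1.0, 0.0))
	// A 2x promotion multiplier is capped at the promotion cap.
	fmt.Println(finalScore(0.7, 2.0, 0.0, 1.0, 0.0))
}
```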
If no promotion policy matches,
promotionMultiplier = 1.0, promotionFloor = 0.0, promotionCap = 1.0, and the formula reduces to max(baseDecayScore, decayFloor).
When multiple
WHEN predicates match within the same promotion policy, the profile with the highest effective multiplier wins. This is deterministic and does not require an explicit composition directive.
6.7 Explainability
For any entity or property, the system should be able to explain:
CREATED, VERSION, or CUSTOM and which property path was used if CUSTOM, whether the value was resolved from accessMeta or stored properties, and whether a fallback to entity creation time occurred due to a null or unparsable value
6.8 Native Cypher Access
The decay subsystem should expose scoring through native Cypher functions so callers can inspect resolved scores without altering Neo4j-compatible node or relationship structures.
Proposed functions:
- decayScore(entity) returns the effective scalar decay score for a node or edge
- decayScore(entity, { scoringMode: 'linear' }) returns the effective scalar decay score for a node or edge using the requested scoring mode
- decayScore(entity, { property: 'summary' }) returns the effective scalar decay score for a specific property on that node or edge
- decayScore(entity, { property: 'summary', scoringMode: 'step' }) returns the effective scalar decay score for a specific property using the requested scoring mode
- decay(entity) returns a structured decay object for the node or edge
- decay(entity, { scoringMode: 'linear' }) returns a structured decay object for the node or edge using the requested scoring mode
- decay(entity, { property: 'summary' }) returns a structured decay object for the requested property
- decay(entity, { property: 'summary', scoringMode: 'step' }) returns a structured decay object for the requested property using the requested scoring mode
The options-object shape avoids ambiguous string overloads.
property and scoringMode are named keys rather than positional string arguments.
The structured
decay(...) result should always expose a Cypher-accessible .score field so callers can write concise expressions without needing a second helper function when they want richer metadata.
Suggested fields on
decay(...) results:
- score
- policy
- scope
- function
- visibilityThreshold
- floor
- applies
- reason
- scoreFrom
The
decay(...) object is a derived value. It should not imply that score metadata is being persisted back onto the node, edge, or property itself.
If a caller invokes
decayScore(...) or decay(...) for a target with no matching policy, the function should return the non-decaying/default result rather than failing. The default scalar should be 1.0, and the structured form should report a neutral non-decaying result.
The existing Cypher scoring API remains unchanged. The score returned by
decayScore(...) and decay(...).score is the final resolved score after applying the decay profile, the profile-declared score start time, and the matching promotion policy.
The promotion policy subsystem should expose accessMeta through a native Cypher function so callers can inspect access-tracking state without altering Neo4j-compatible node or relationship structures.
Proposed function:
policy(entity) returns the accessMeta map for the node or edge as a structured Cypher object
There is no correlated
policyScore() scalar function. Unlike decay()/decayScore(), the accessMeta map is a general-purpose key-value store with no single canonical scalar to extract. Callers access individual keys through standard Cypher property access on the returned map, for example policy(n).accessCount or policy(r).traversalCount.
policy(...) results:
- _targetId: the target node or edge identifier
- _targetScope: node or edge
- _lastAccessedAt: timestamp of the most recent node access
- _lastMutatedAt: timestamp of the most recent ON ACCESS mutation
- _mutationCount: total number of ON ACCESS mutations applied
The
policy(...) object is a derived value read from the accessMeta index. It does not imply that access-tracking metadata is stored on the node or edge itself.
If a caller invokes
policy(...) for a target with no accessMeta entry, the function should return an empty map with only the _targetId and _targetScope fields rather than failing.
The scoring subsystem should expose a bypass function so callers can retrieve the raw stored entity without decay-driven visibility filtering or property hiding.
Proposed function:
reveal(entity) returns the raw stored node or edge as it exists in primary storage, bypassing all scoring-driven visibility suppression and property-level decay hiding
reveal() is a plan-level visibility bypass marker, not a runtime function. It does not disable scoring — the entity still has a resolved score. It disables the visibility gate that would otherwise hide the entity or its properties from the query result. reveal() is the only mechanism to access entities that are invisible due to scoring. It does not affect decayScore(), decay(), or policy() — those functions still return the resolved values.
When the query planner detects
reveal(variable) anywhere in the query (RETURN, WITH, WHERE, or ORDER BY), it marks that variable's binding as visibility-bypassed during plan compilation. A visibility-bypassed binding skips scoring-driven suppression at MATCH time. The entity is always materialized. Its score is still computed (so decayScore() and decay() return correct values), but the visibility gate is disabled for that binding. This is equivalent to the planner rewriting MATCH (m:MemoryEpisode) RETURN reveal(m) into a plan where m's scan does not apply the visibility filter.
If
reveal() is used on one variable but not another in the same query, only the revealed variable bypasses visibility. Example: MATCH (m:MemoryEpisode)-[:EVIDENCES]->(k:KnowledgeFact) RETURN reveal(m), k — m bypasses visibility, k does not. reveal() with no downstream usage in the query is a no-op (standard dead-code elimination). reveal() wrapping an already-visible entity is a no-op at runtime.
When
reveal() is used, the returned entity includes all stored properties, including any that would normally be hidden by property-level decay exclusion. The entity appears in query results regardless of its final score. reveal() works on both nodes and edges. It should be usable in RETURN, WITH, WHERE, and any other Cypher clause that accepts an entity expression.
If decay is not enabled or the entity is not subject to any scoring-driven visibility suppression,
reveal() is a no-op and returns the entity unchanged.
Suppressed properties do not exist as a concept. Properties remain directly queryable in Cypher even when property-level decay excludes them from vectorization or vector-backed retrieval.
Example usage:
MATCH (m:MemoryEpisode) RETURN reveal(m)
Compatibility rule:
- RETURN n remains Neo4j-compatible and does not automatically inject decay metadata into the node; however, n is subject to scoring-driven visibility — if the entity's score renders it invisible, it will not appear in results unless accessed through reveal(n)
- RETURN r remains Neo4j-compatible and does not automatically inject decay metadata into the edge; same visibility rules apply
- RETURN reveal(n) or RETURN reveal(r) bypasses scoring-driven visibility and property hiding, returning the raw stored entity
- Callers opt in to decayScore(...), decay(...), policy(...), or reveal(...) explicitly as additional columns
- option keys are property and scoringMode
- scoringMode values remain: exponential, linear, step, none
- unified retrieval does not expose the scoringMode override
6.9 Unified Search Metadata
The unified search service should follow the same derived-on-read model as native Cypher.
It should not persist node-, edge-, or property-level decay scores into stored entity fields. Instead, when requested, it should add resolved scoring metadata into a separate response
meta structure.
Unified retrieval scoring should use the same scorer as Cypher scoring functions, but it should remain profile-and-policy-resolved and should not expose the Cypher-only
scoringMode override.
The shape should be a keyed object rather than an array of single-entry maps.
Preferred shape:
{
  "scores": {
    "node-id-12": {
      "decay": 0.82,
      "properties": {
        "property1": { "decay": 0.44 },
        "property2": { "decay": 0.91 }
      }
    },
    "edge-id-77": {
      "decay": 0.63,
      "properties": {
        "signalScore": { "decay": 0.28 }
      }
    }
  }
}
- scores[id].decay
- scores[id].properties[propertyKey].decay
- Additional metadata keys may accompany decay, such as policy, reason, scope, or scoreFrom
- decay should be reported as 1.0 unless an explicit configured default policy says otherwise
Suggested retrieval scoring inputs:
- property when scoring needs to target a specific property
- no scoringMode; mode selection comes from the resolved decay profile
The existing unified search metadata shape remains unchanged. Promotion-policy effects and score-start-time effects are reflected in the resolved score value rather than through a new response field, though richer metadata may optionally expose the selected
scoreFrom.
Suppressed nodes and edges should be excluded from unified retrieval as soon as possible. Property-level exclusions should affect vectorization and vector-backed retrieval only, while stored properties remain directly queryable in Cypher.
When vector search (e.g.,
db.retrieve, db.index.vector.queryNodes) returns candidates that are subsequently suppressed by decay visibility, the caller may receive fewer results than the requested LIMIT. To address this, the vector search layer should chunk results based on the LIMIT value and continue pulling additional chunks until the original limit is satisfied or the index is exhausted. This ensures that decay-filtered vector search returns the expected number of visible results.
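The chunked refill loop can be sketched as below. The `search` and `visible` callbacks are stand-ins for the paged vector-index query and the decay visibility gate; the function name and chunk-size choice (equal to the LIMIT) are illustrative assumptions.

```go
package main

import "fmt"

// fetchVisible keeps pulling chunks from the vector index until the requested
// limit of visible results is satisfied or the index is exhausted.
func fetchVisible(limit int, search func(offset, count int) []string, visible func(id string) bool) []string {
	results := make([]string, 0, limit)
	offset := 0
	for len(results) < limit {
		chunk := search(offset, limit) // chunk size derived from the LIMIT value
		if len(chunk) == 0 {
			break // index exhausted: return what we have
		}
		for _, id := range chunk {
			if visible(id) && len(results) < limit {
				results = append(results, id)
			}
		}
		offset += len(chunk)
	}
	return results
}

func main() {
	index := []string{"a", "b", "c", "d", "e", "f", "g", "h"}
	search := func(offset, count int) []string {
		if offset >= len(index) {
			return nil
		}
		end := offset + count
		if end > len(index) {
			end = len(index)
		}
		return index[offset:end]
	}
	suppressed := map[string]bool{"a": true, "c": true, "d": true}
	fmt.Println(fetchVisible(3, search, func(id string) bool { return !suppressed[id] }))
}
```

A production version would likely grow the chunk size when the observed suppression rate is high, to reduce round trips against the index.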