Logging framework #32

@ericelliott


# Feature Request: Unified Event Driven Logger for AIDD (`aidd/logger`)

## Summary
Create a unified event-driven logging and telemetry framework for client and server in AIDD.
The logger does not create `dispatch`; the framework provides it, and the framework's implementation is out of scope for this issue.
On the client, use `aidd/client/useDispatch`. On the server, use `response.locals.dispatch`.
The logger subscribes to the framework event stream and applies per-event rules for sampling, sanitization, serialization, batching, and transport.
The client uses localStorage for write-through buffering. When online, it flushes immediately; when offline, it pools buffers and auto-flushes on reconnection.

---

## Core design

```js
// Provided by the broader aidd framework (implementation is out of scope)
events$                   // rxjs Observable of all dispatched events
useDispatch()             // client hook
response.locals.dispatch  // server, per request

// Provided by the logger
createLogger(options)
```

All logging originates from framework `dispatch(event)`.
The logger subscribes to `events$` and routes matching events to storage and transport.
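This flow can be sketched in a few lines. Note this is a self-contained illustration, not the framework code: a tiny hand-rolled subject stands in for the rxjs `events$`, and the rule shape follows the per-event configuration described later in this issue.

```js
// Minimal stand-in for an rxjs Subject so the sketch runs without dependencies.
const createSubject = () => {
  const listeners = [];
  return {
    subscribe: (listener) => listeners.push(listener),
    next: (event) => listeners.forEach((listener) => listener(event)),
  };
};

const events$ = createSubject();
const dispatch = (event) => events$.next(event);

// The logger subscribes to the stream and routes only matching events.
const createLogger = ({ events = {} } = {}) => {
  const routed = [];
  events$.subscribe((event) => {
    const rule = events[event.type];
    if (rule && rule.shouldLog !== false) routed.push(event);
  });
  return { routed };
};

const logger = createLogger({ events: { pageViewed: {} } });
dispatch({ type: 'pageViewed', payload: { route: '/' } });
dispatch({ type: 'untracked', payload: {} });
// logger.routed now holds only the pageViewed event
```

The key design point: app code only ever calls `dispatch`; the logger is an observer, so logging rules can change without touching call sites.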


## Client API

```js
import { createLogger } from 'aidd/logger/client.js';
import { useDispatch } from 'aidd/client/useDispatch.js';

const { log, withLogger } = createLogger(options);

// app code uses the framework dispatch
const dispatch = useDispatch();

dispatch({ type: 'page_view', payload: { message: 'home', timeStamp: Date.now() } });
log('client ready');
```

### Behavior

- Monitor online and offline state
- Write through to localStorage on every log
- If online, flush immediately in the background
- If offline, append to pooled buffers and auto-flush on reconnection
- Batch events for network efficiency
- Prefer `navigator.sendBeacon` with `fetch` POST fallback
- Retry with backoff and jitter
- Evict oldest entries when the storage cap is reached
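The retry and transport behaviors above can be sketched as follows. `backoffDelay` and `send` are hypothetical names, and the base delay and cap values are assumptions for illustration, not part of this spec:

```js
// Exponential backoff with full jitter: the delay doubles per attempt,
// is capped, and a random fraction of it is used so clients that went
// offline together don't all retry at the same instant.
const backoffDelay = ({
  attempt,               // 0-based retry count
  baseDelayMs = 500,     // assumed default
  maxDelayMs = 30000,    // assumed cap
  random = Math.random,  // injectable for testing
}) => {
  const capped = Math.min(maxDelayMs, baseDelayMs * 2 ** attempt);
  return Math.floor(random() * capped); // full jitter: [0, capped)
};

// Prefer navigator.sendBeacon (survives page unload), falling back to
// fetch POST with keepalive when sendBeacon is unavailable.
const send = (endpoint, body) =>
  typeof navigator !== 'undefined' && navigator.sendBeacon
    ? navigator.sendBeacon(endpoint, body)
    : fetch(endpoint, { method: 'POST', body, keepalive: true });
```

Full jitter is chosen here over equal jitter because flush retries are cheap to delay and the main goal is to spread load after mass reconnection.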

## Server API

```js
import { createLogger } from 'aidd/logger/server.js';

const { log, attach } = createLogger(options);

// attach adds the logger to response.locals.logger;
// the per-request dispatch is at response.locals.dispatch
```

### Behavior

- Mirror the client API for parity
- `attach({ request, response })` sets `response.locals.logger`
- No client buffering logic on the server side
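A minimal sketch of `attach`, assuming an Express-style `response.locals` object and the `console` endpoint default; this is an illustration of the shape, not the shipped implementation:

```js
// Sketch only: attach puts a logger on response.locals.logger so route
// handlers can log without importing the logger module directly.
const createLogger = (options = {}) => {
  const log = (message, meta = {}) => {
    const entry = JSON.stringify({ message, ...meta });
    // default endpoint is the console; a real endpoint would enqueue
    if (options.endpoint === undefined || options.endpoint === 'console') {
      console.log(entry);
    }
    return entry;
  };

  const attach = ({ request, response }) => {
    response.locals = response.locals || {};
    response.locals.logger = { log };
    return { request, response };
  };

  return { log, attach };
};

// usage with a stand-in request/response pair
const { attach } = createLogger();
const response = { locals: {} };
attach({ request: {}, response });
response.locals.logger.log('request handled', { requestId: 'r1' });
```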

## Configuration

```js
createLogger(options)

options
  endpoint            // POST target or internal queue
  payloadSanitizer    // (any) => any
  headerSanitizer     // (headers) => headers
  serializer          // (any) => string
  batchSizeMin        // default 10
  batchSizeMax        // default 50 events or 64k bytes
  flushIntervalMs     // background flush tick when online
  maxLocalBytes       // cap for the localStorage pool
  consentProvider     // () => { analytics: bool }
  getIds              // () => { sessionId, userPseudoId, requestId? }
  clock               // () => Date.now()
  level               // default info
  sampler             // rxjs pipeable operator, default takeEvery
  events?             // per-event overrides, see below
```

## Per-event configuration

```js
createLogger({
  events: {
    [type]: {
      shouldLog = true,
      sampler = takeEvery,             // rxjs pipeable operator
      sanitizer = standardSanitizer,   // (payload) => payload
      serializer = standardSerializer, // (payload) => string
      level = info,
    }
  }
})
```

### Notes

- `sampler` can be any rxjs pipeable operator, such as `takeEvery`, `sampleTime`, `throttleTime`, or `bufferTime`
- `sanitizer` runs before serialization
- `serializer` outputs a compact string
- Missing entries use global defaults
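The sanitize-before-serialize ordering can be shown with a small sketch. `processEvent` is a hypothetical helper for illustration, not part of the API:

```js
// Per-event processing order: sanitizer first, then serializer, so
// sensitive fields are removed before anything is turned into a string.
const processEvent = (event, rule = {}) => {
  const sanitize = rule.sanitizer ?? ((payload) => payload);
  const serialize = rule.serializer ?? JSON.stringify;
  return serialize(sanitize(event.payload));
};

const rule = {
  // scrub a sensitive field before serialization
  sanitizer: ({ password, ...rest }) => rest,
};

const wire = processEvent(
  { type: 'login', payload: { user: 'a', password: 'hunter2' } },
  rule
);
// wire === '{"user":"a"}' — the password never reaches the serializer
```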

## Event envelope and log payload

```js
LogPayload
  timeStamp        // Date.now() at creation
  message          // string
  logLevel         // debug | info | warn | error | fatal
  sanitizer        // optional override
  serializer       // optional override
  context          // key map of contextual fields
  props            // additional structured data
  createdAt?       // time of server ingestion

Event
  type             // string
  payload          // LogPayload | any
```
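A concrete event matching the envelope above, with illustrative values:

```js
// Illustrative only; field names follow the LogPayload shape above.
const event = {
  type: 'pageViewed',
  payload: {
    timeStamp: 1700000000000,
    message: 'home',
    logLevel: 'info',
    context: { route: '/' },
    props: { referrer: 'direct' },
  },
};
```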

Client and server enrich with:

```js
Enrichment
  schemaVersion = 1
  eventId = cuid2()
  userPseudoId
  requestId
  appVersion
  route
  locale
```
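A sketch of the enrichment step. `cuid2()` is assumed to come from a cuid2 library; a counter stands in here so the sketch runs without dependencies:

```js
// Stand-in id generator (a real implementation would call cuid2()).
let nextId = 0;
const newEventId = () => `evt_${++nextId}`;

// Enrichment wraps the raw event without mutating it.
const enrich = (event, context = {}) => ({
  ...event,
  schemaVersion: 1,
  eventId: newEventId(),
  ...context, // userPseudoId, requestId, appVersion, route, locale
});

const enriched = enrich(
  { type: 'pageViewed', payload: { route: '/' } },
  { userPseudoId: 'u1', appVersion: '1.0.0', locale: 'en-US' }
);
```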

## Security and privacy

### Privacy

- Collect the minimum required
- No passwords, tokens, credit cards, or other sensitive PII
- Apply `payloadSanitizer` and `headerSanitizer`
- Check `consentProvider` before logging any non-essential tracking events, which implies we need a way to mark events essential
- Opt-out disables non-essential logging

### Security (server side)

```js
POST $eventsRoute[eventId]
  require method is POST
  require contentType is application/json
  require origin in allowedOrigins
  require referer origin matches origin
  parse body
  require byteLength <= maxByteLength
  for each event in body.events
    require eventId
    require client timeStamp within skewWindow
    require schema validation
  idempotent by eventId
  enqueue
  respond 204
```
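The checklist above, expressed as a pure validation function; the function name and parameter shapes are assumptions for illustration:

```js
// Returns { ok: true } or { ok: false, reason } for the first failed check.
const validateEventsRequest = (req, {
  allowedOrigins,
  maxByteLength,
  skewWindowMs,
  now = Date.now(),
}) => {
  if (req.method !== 'POST') return { ok: false, reason: 'method' };
  if (req.contentType !== 'application/json') return { ok: false, reason: 'contentType' };
  if (!allowedOrigins.includes(req.origin)) return { ok: false, reason: 'origin' };
  if (req.refererOrigin !== req.origin) return { ok: false, reason: 'referer' };
  if (req.byteLength > maxByteLength) return { ok: false, reason: 'byteLength' };
  for (const event of req.body.events) {
    if (!event.eventId) return { ok: false, reason: 'eventId' };
    if (Math.abs(now - event.payload.timeStamp) > skewWindowMs) {
      return { ok: false, reason: 'skew' };
    }
  }
  return { ok: true }; // caller then dedupes by eventId, enqueues, responds 204
};
```

Keeping validation pure makes the checklist directly unit-testable before it is wired into a route handler.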

## Example usage

```js
// client
const logger = createLogger({
  endpoint: '/api/events', // versioning is built into the metadata
  events: {
    pageViewed: { sampler: takeEvery, level: info },
    mouseMoved: { sampler: sampleTime(500), level: info },
    error: { sampler: takeEvery, level: error, sanitizer: scrubError }
  }
});

const dispatch = useDispatch();

dispatch({
  type: 'pageViewed',
  payload: { route: '/' }
});
```

```js
// server
const serverLogger = createLogger({
  endpoint: 'console', // default
  events: {
    httpRequest: { sampler: takeEvery, level: info, sanitizer: scrubRequest },
    httpResponse: { sampler: takeEvery, level: info, sanitizer: scrubResponse },
    error: { sampler: takeEvery, level: error, sanitizer: scrubError }
  }
});

// in a route handler
response.locals.dispatch(actionCreator(payload));
```

## Best practices

- Structured logs: stable JSON keys for aggregation
- Correlation: `requestId`, `userPseudoId`

## Agent instructions

Before implementing, `/review` this issue description for adherence to best practices (including repo conventions, logging, GDPR compliance, and security).

Create a `/task` epic and save it.

Make sure there's a questions section in the task epic if you have any questions.
