
Storage & CQRS Architecture

Candela uses a CQRS (Command Query Responsibility Segregation) storage architecture. This separates write and read concerns into distinct interfaces, enabling flexible multi-sink configurations.

// SpanWriter is a write-only destination for spans.
type SpanWriter interface {
	IngestSpans(ctx context.Context, spans []Span) error
	Ping(ctx context.Context) error
}

// SpanReader is a read-only source for querying spans and traces.
type SpanReader interface {
	GetTrace(ctx context.Context, traceID string) (*Trace, error)
	QueryTraces(ctx context.Context, query TraceQuery) (*TraceResult, error)
	SearchSpans(ctx context.Context, query SpanQuery) (*SpanResult, error)
	GetUsageSummary(ctx context.Context, query UsageQuery) (*UsageSummary, error)
	GetModelBreakdown(ctx context.Context, query UsageQuery) ([]ModelUsage, error)
	Ping(ctx context.Context) error
}

// TraceStore combines both for backends that support full read/write.
type TraceStore interface {
	SpanWriter
	SpanReader
}
The span processor fans each batch out to every configured sink:

           ┌─────────────┐
           │   Proxy /   │
           │  ConnectRPC │
           └──────┬──────┘
           ┌──────▼──────┐
           │    Span     │
           │  Processor  │  ← batches spans, applies cost calc
           └──────┬──────┘
     ┌────────────┼────────────┬────────────┐
     ▼            ▼            ▼            ▼
┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐
│  DuckDB  │ │ BigQuery │ │ Pub/Sub  │ │   OTLP   │
│ (Writer  │ │ (Writer  │ │ (Writer  │ │ (Writer  │
│ + Reader)│ │ + Reader)│ │  Only)   │ │  Only)   │
└──────────┘ └──────────┘ └──────────┘ └──────────┘

DuckDB

Best for: local development, edge deployments, single-server production.

storage:
  backend: "duckdb"
  duckdb:
    path: "candela.duckdb"
  • Write API: DuckDB Appender (columnar batch insert)
  • Attributes: ARRAY<STRUCT<key VARCHAR, value VARCHAR>>
  • No PRIMARY KEY — OLAP convention, duplicates rare
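The ARRAY<STRUCT<key, value>> layout maps naturally from a Go attribute map. A hedged sketch of that conversion; the attrPair type and helper name are assumptions, not Candela's row model:

```go
package main

import (
	"fmt"
	"sort"
)

// attrPair mirrors one element of DuckDB's
// ARRAY<STRUCT<key VARCHAR, value VARCHAR>> attributes column (assumed shape).
type attrPair struct {
	Key   string
	Value string
}

// toAttrPairs flattens a span's attribute map into a deterministic,
// key-sorted struct array ready for a columnar batch insert.
func toAttrPairs(attrs map[string]string) []attrPair {
	pairs := make([]attrPair, 0, len(attrs))
	for k, v := range attrs {
		pairs = append(pairs, attrPair{Key: k, Value: v})
	}
	sort.Slice(pairs, func(i, j int) bool { return pairs[i].Key < pairs[j].Key })
	return pairs
}

func main() {
	pairs := toAttrPairs(map[string]string{
		"gen_ai.system":        "openai",
		"gen_ai.request.model": "gpt-4o",
	})
	fmt.Println(pairs[0].Key, pairs[1].Key) // gen_ai.request.model gen_ai.system
}
```

Sorting the pairs makes batches reproducible, which helps when diffing or spot-checking ingested rows.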

SQLite

Best for: lightweight development, embedded testing.

storage:
  backend: "sqlite"
  sqlite:
    path: "candela.db"
  • Driver: modernc.org/sqlite (pure Go, CGO-free)
  • Attributes: JSON-serialized TEXT column

BigQuery

Best for: production at scale, serverless analytics.

storage:
  backend: "bigquery"
  bigquery:
    project_id: "my-gcp-project"
    dataset: "candela"
    table: "spans"
    location: "US"
  • Write API: Streaming insert with dedup keys
  • Partitioning: start_time by DAY
  • Clustering: (project_id, trace_id)
  • Auth: Application Default Credentials
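Streaming inserts deduplicate on a per-row insert ID, so deriving that ID deterministically from the span's identity makes retried batches idempotent. A sketch of one such key; the trace_id/span_id format is an illustrative assumption, not necessarily the key Candela uses:

```go
package main

import "fmt"

// insertID builds a deterministic dedup key for a streaming insert,
// so retrying the same batch cannot double-count a span.
func insertID(traceID, spanID string) string {
	return traceID + "/" + spanID
}

func main() {
	fmt.Println(insertID("t1", "s1")) // t1/s1
}
```

Because span IDs are unique within a trace, the combined key stays stable across retries while remaining unique across the table.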
Pub/Sub

Forward spans to a Google Cloud Pub/Sub topic (write-only sink):

sinks:
  pubsub:
    enabled: true
    project_id: "my-gcp-project"
    topic: "candela-spans"

OTLP

Forward traces to any OpenTelemetry-compatible backend:

sinks:
  otlp:
    enabled: true
    endpoint: "http://localhost:4318/v1/traces"
    protocol: "http"
    compression: "gzip"

All backends share the same logical schema:

Column                DuckDB     BigQuery   SQLite
span_id               VARCHAR    STRING     TEXT
trace_id              VARCHAR    STRING     TEXT
parent_span_id        VARCHAR    STRING     TEXT
name                  VARCHAR    STRING     TEXT
gen_ai_model          VARCHAR    STRING     TEXT
gen_ai_provider       VARCHAR    STRING     TEXT
gen_ai_input_tokens   BIGINT     INT64      INTEGER
gen_ai_output_tokens  BIGINT     INT64      INTEGER
gen_ai_cost_usd       DOUBLE     FLOAT64    REAL
attributes            STRUCT[]   STRUCT[]   TEXT (JSON)
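In Go, this logical row can be mirrored as one flat struct. The field set comes from the table above; the struct and field names are assumptions for illustration:

```go
package main

import "fmt"

// spanRow mirrors the shared logical schema: one field per column,
// with Go types matching BIGINT/INT64 and DOUBLE/FLOAT64 semantics.
type spanRow struct {
	SpanID            string
	TraceID           string
	ParentSpanID      string
	Name              string
	GenAIModel        string
	GenAIProvider     string
	GenAIInputTokens  int64
	GenAIOutputTokens int64
	GenAICostUSD      float64
	Attributes        map[string]string // STRUCT[] or JSON TEXT per backend
}

func main() {
	r := spanRow{SpanID: "s1", GenAIInputTokens: 128, GenAICostUSD: 0.0021}
	fmt.Println(r.SpanID, r.GenAIInputTokens) // s1 128
}
```

Keeping one shared row type means each backend only differs in how it encodes the attributes field.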
Adding a New Backend

  1. Create pkg/storage/mybackend/mybackend.go
  2. Implement storage.SpanWriter (minimum) or storage.TraceStore (full)
  3. Add config struct and initStorage case in cmd/candela-server/main.go
  4. For write-only sinks, add to the sinks config section