# candela-server

candela-server is the full-featured Candela backend — API server, LLM proxy, span ingestion, and storage, packaged with a Next.js dashboard UI.

| Feature | Description |
| --- | --- |
| LLM Proxy | Multi-provider routing (OpenAI, Gemini, Anthropic via Vertex AI) |
| Span Ingestion | CQRS storage with DuckDB, SQLite, and BigQuery backends |
| Cost Engine | Real-time token counting and cost calculation |
| Budget Enforcement | Per-user spending limits via Firestore |
| Dashboard | Next.js UI for traces, costs, and admin |
| Auth | Firebase Auth, Google ID tokens, OAuth2 access tokens |

The server supports three authentication strategies, tried in order:

| Strategy | Used By | Token Source |
| --- | --- | --- |
| Firebase ID Token | Browser UI | Firebase JS SDK |
| Google ID Token | Service accounts | `idtoken.NewTokenSource()` |
| OAuth2 Access Token | candela-local (user ADC) | `gcloud auth application-default login` |

The production container runs both the Go backend and the Next.js UI.

`entrypoint.sh` starts:

1. The Go backend (port 8181, in the background)
2. The Next.js standalone server (port 3000, in the foreground)

Next.js rewrites forward these paths to the backend:

- `/proxy/*` → `localhost:8181`
- `/candela.v1.*` → `localhost:8181`
- `/healthz` → `localhost:8181`
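In the container these rewrites live in the Next.js config, but the routing rule itself is simple path matching. As a minimal sketch (not the server's actual code), the same split can be expressed with Go's standard reverse proxy; the backend URL and the `isBackendPath` helper are assumptions for illustration:

```go
package main

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// isBackendPath reproduces the rewrite rules above: these paths go to the
// Go backend on :8181; everything else is served by the Next.js UI.
func isBackendPath(p string) bool {
	return strings.HasPrefix(p, "/proxy/") ||
		strings.HasPrefix(p, "/candela.v1.") ||
		p == "/healthz"
}

func main() {
	backend, _ := url.Parse("http://localhost:8181")
	proxy := httputil.NewSingleHostReverseProxy(backend)

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		if isBackendPath(r.URL.Path) {
			proxy.ServeHTTP(w, r) // forward to the Go backend
			return
		}
		http.NotFound(w, r) // in production, Next.js serves the UI here
	})
	http.ListenAndServe(":3000", nil)
}
```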
```sh
# Build
gcloud builds submit --project $PROJECT --config deploy/cloudbuild.yaml .

# Deploy
gcloud run services update candela --project $PROJECT --region $REGION \
  --image $REGION-docker.pkg.dev/$PROJECT/candela/candela-server:latest

# Infrastructure
cd terraform && terraform apply
```
| File | Resources |
| --- | --- |
| `cloud_run.tf` | Cloud Run service, IAM |
| `firebase.tf` | Firebase project, Identity Platform |
| `bigquery.tf` | Dataset + spans table (time-partitioned) |
| `firestore.tf` | Firestore database |
| `iam.tf` | Service account + role bindings |
| `artifact_registry.tf` | Container image registry |