Getting insights from the SQLite tracer

Run Stario with SqliteTracer so finished spans land in a SQLite file you can query with ad hoc SQL. That is handy for local debugging and quick operational questions. The file is not a distributed trace store; treat it as a local sink (see Bundled tracers, SqliteTracer).

```bash
stario serve main:bootstrap --tracer sqlite
```

By default the database is stario-traces.sqlite3 in the working directory. The built-in sqlite tracer uses that default; to change the path or options, pass a custom Tracer factory (<module>:<callable>) whose callable configures them.

What gets stored

The spans table holds one row per finished span: duration_ns; status (ok or error); error text when the span failed; trace_id / parent_id (when the tracer recorded them); request_method, request_path, and response_status_code (when the HTTP stack recorded them); and attributes_json for everything else. The span_events table stores named events; the name exception is used for structured exception records from Span.exception, separate from span status / fail. The telemetry_status table tracks dropped spans and writer errors.

Schema details live in the framework source (stario.telemetry.sqlite); the queries below assume the default layout. Span links (if you use Span.link) are stored in the span_links table.
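You can also ask the file itself for the layout instead of reading the source; this sqlite_master query is standard SQLite and works in any database:

```sql
SELECT name, sql
FROM sqlite_master
WHERE type = 'table'
ORDER BY name;
```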

HTTP exchanges: root spans are not only HTTP requests; startup, for example, may record a root span without request_path. For dashboards about requests, filter with request_path IS NOT NULL (and usually request_method IS NOT NULL) so averages and “slowest” lists stay meaningful.

Example queries

Open the file with sqlite3, the SQLite extension in your editor, or any SQL client:

```bash
sqlite3 stario-traces.sqlite3
```

Average request duration (HTTP root spans)

Root spans usually sit at depth = 0. Restrict to HTTP traffic so startup and other non-request roots do not skew the average.

```sql
SELECT
  ROUND(AVG(duration_ns) / 1e6, 3) AS avg_ms,
  COUNT(*) AS n
FROM spans
WHERE depth = 0
  AND request_path IS NOT NULL;
```

Approximate p95 latency (HTTP root spans)

Percentile functions are not built into older SQLite builds, so this query uses an ordered offset instead; it is good enough for rough SLO checks when you have enough samples:

```sql
SELECT duration_ns / 1e6 AS approx_p95_ms
FROM spans
WHERE depth = 0
  AND request_path IS NOT NULL
ORDER BY duration_ns ASC
LIMIT 1 OFFSET (
  SELECT CASE
    WHEN COUNT(*) = 0 THEN 0
    ELSE MAX(0, CAST((COUNT(*) - 1) * 0.95 AS INTEGER))
  END
  FROM spans AS s2
  WHERE s2.depth = 0
    AND s2.request_path IS NOT NULL
);
```
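If you want to sanity-check the offset arithmetic, the query can be exercised with Python's stdlib sqlite3 module against a toy spans table, reduced here to just the columns the query reads (the real schema has more):

```python
import sqlite3

# Toy spans table: only depth, request_path, duration_ns matter here.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE spans (depth INTEGER, request_path TEXT, duration_ns INTEGER)"
)
# 100 HTTP root spans with durations 1 ms .. 100 ms
conn.executemany(
    "INSERT INTO spans VALUES (0, '/demo', ?)",
    [(ms * 1_000_000,) for ms in range(1, 101)],
)
# One non-HTTP root span that the filter must exclude
conn.execute("INSERT INTO spans VALUES (0, NULL, 999000000)")

row = conn.execute("""
    SELECT duration_ns / 1e6 AS approx_p95_ms
    FROM spans
    WHERE depth = 0 AND request_path IS NOT NULL
    ORDER BY duration_ns ASC
    LIMIT 1 OFFSET (
      SELECT CASE
        WHEN COUNT(*) = 0 THEN 0
        ELSE MAX(0, CAST((COUNT(*) - 1) * 0.95 AS INTEGER))
      END
      FROM spans AS s2
      WHERE s2.depth = 0 AND s2.request_path IS NOT NULL
    )
""").fetchone()
print(row[0])  # 95.0: the 95th of 100 ordered samples
```

With 100 samples the offset is CAST(99 * 0.95 AS INTEGER) = 94, so the query returns the 95th-smallest duration.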

Most recent HTTP requests

```sql
SELECT
  request_method,
  request_path,
  response_status_code,
  duration_ns / 1e6 AS dur_ms,
  datetime(end_ns / 1e9, 'unixepoch') AS ended_utc
FROM spans
WHERE depth = 0
  AND request_path IS NOT NULL
ORDER BY end_ns DESC
LIMIT 20;
```

Slowest HTTP requests (by duration)

```sql
SELECT
  request_method,
  request_path,
  response_status_code,
  duration_ns / 1e6 AS dur_ms,
  datetime(end_ns / 1e9, 'unixepoch') AS ended_utc
FROM spans
WHERE depth = 0
  AND request_path IS NOT NULL
ORDER BY duration_ns DESC
LIMIT 20;
```
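A grouped variant of the same filter turns this into a per-route summary (same assumed columns as above; note that if request_path stores raw URLs containing IDs, high-cardinality paths will fragment the grouping):

```sql
SELECT
  request_method,
  request_path,
  COUNT(*) AS n,
  ROUND(AVG(duration_ns) / 1e6, 3) AS avg_ms,
  ROUND(MAX(duration_ns) / 1e6, 3) AS max_ms
FROM spans
WHERE depth = 0
  AND request_path IS NOT NULL
GROUP BY request_method, request_path
ORDER BY avg_ms DESC
LIMIT 20;
```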

HTTP 5xx responses

```sql
SELECT
  request_method,
  request_path,
  response_status_code,
  duration_ns / 1e6 AS dur_ms,
  datetime(end_ns / 1e9, 'unixepoch') AS ended_utc
FROM spans
WHERE response_status_code >= 500
ORDER BY end_ns DESC
LIMIT 50;
```
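To see which routes fail rather than listing individual failures, the same columns support an error-rate rollup; SQLite evaluates the boolean expression as 0 or 1, so SUM counts matches:

```sql
SELECT
  request_path,
  COUNT(*) AS n,
  SUM(response_status_code >= 500) AS n_5xx,
  ROUND(100.0 * SUM(response_status_code >= 500) / COUNT(*), 2) AS pct_5xx
FROM spans
WHERE depth = 0
  AND request_path IS NOT NULL
  AND response_status_code IS NOT NULL
GROUP BY request_path
ORDER BY pct_5xx DESC
LIMIT 20;
```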

Spans marked failed (status = 'error')

```sql
SELECT
  name,
  error,
  request_path,
  datetime(end_ns / 1e9, 'unixepoch') AS ended_utc
FROM spans
WHERE status = 'error'
ORDER BY end_ns DESC
LIMIT 50;
```

Exception events (with request path when present)

```sql
SELECT
  e.name AS event_name,
  s.request_path,
  e.body AS detail,
  datetime(e.time_ns / 1e9, 'unixepoch') AS event_utc
FROM span_events AS e
JOIN spans AS s ON s.span_id = e.span_id
WHERE e.name = 'exception'
ORDER BY e.time_ns DESC
LIMIT 50;
```
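Building on the same join, a grouped variant shows which spans raise most often:

```sql
SELECT
  s.name,
  COUNT(*) AS n_exceptions
FROM span_events AS e
JOIN spans AS s ON s.span_id = e.span_id
WHERE e.name = 'exception'
GROUP BY s.name
ORDER BY n_exceptions DESC
LIMIT 20;
```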

Tracer health (drops and writer errors)

```sql
SELECT
  dropped_spans,
  writer_error_count,
  last_writer_error,
  datetime(last_writer_error_at_ns / 1e9, 'unixepoch') AS last_error_utc
FROM telemetry_status
WHERE singleton = 1;
```

Custom and remote tracers

If you need Datadog, OpenTelemetry, or another sink, implement the Tracer protocol and wire it in from the CLI or Server; see Telemetry, Custom tracers.