Logging ¶
Stario includes a structured logging system designed specifically for observability in hypermedia-driven applications. It automatically tracks request IDs, buffers logs intelligently, and adapts output format based on environment.
Why custom logging? Stario uses its own logging approach instead of Python's standard logging module to provide request lifecycle awareness and reduce noise in production. See Why Stario Has Custom Logging for the complete reasoning behind this design.
Architecture Overview ¶
RouteLogger / AccessLogger (produce records)
            ↓
LogQueue (buffered, async flushing)
            ↓
 [Sink 1]  [Sink 2]  [Sink N]    (RichConsoleSink, JSONSink, CustomSink)
     ↓         ↓         ↓
  stdout     file     database
Three log record types flow through the system:
- RequestLogRecord - Request arrival (method, path, client)
- RouteLogRecord - During request handling (your application logs)
- ResponseLogRecord - Request completion (status code, exceptions)
All records include a request ID (also called correlation ID) to trace requests end-to-end.
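For orientation, the three record types can be pictured as simple dataclasses sharing a request_id. This is a simplified sketch; the real classes live in stario.logging.types and carry more fields:

```python
from dataclasses import dataclass, field
from uuid import uuid4

# Hypothetical, simplified shapes of the three record types described above.
@dataclass
class RequestLogRecord:
    request_id: str
    method: str
    path: str

@dataclass
class RouteLogRecord:
    request_id: str
    level: str
    message: str
    context: dict = field(default_factory=dict)

@dataclass
class ResponseLogRecord:
    request_id: str
    status_code: int

# All three share the same request_id, so they can be joined end-to-end later.
rid = uuid4().hex[:8]
records = [
    RequestLogRecord(rid, "GET", "/users/42"),
    RouteLogRecord(rid, "INFO", "Fetching user", {"user_id": 42}),
    ResponseLogRecord(rid, 200),
]
```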
Development vs Production Logging (defaults) ¶
Stario automatically selects the appropriate logging sink based on whether stdout is a TTY (terminal):
Development (TTY) ¶
When running locally in a terminal:
python -m uvicorn app.main:app
# or
python main.py
You get RichConsoleSink:
- Pretty-printed with colors
- Request grouping by request ID
- Live updates for in-progress requests
- Relative timestamps
- Human-readable output
12:34:56.789 | abc123de | GET /users/42 [200] 45.2ms
12:34:57.123 | fed456ba | POST /api/create [201] 123.5ms
╭─ Request ────────────────────────────────────────────────────────────────────
│ 12:34:58.456 | 789abc01 | POST /process [200] 567.8ms
│ +125ms | INFO | Starting process | item_count=42
│ +456ms | WARNING | Rate limit approaching | remaining=10
│ +567ms | INFO | Process complete | success=true
╰──────────────────────────────────────────────────────────────────────────────
Production (Non-TTY) ¶
When running in Docker, systemd, or via process managers:
docker run myapp
# or
systemctl start myapp
You get JSONSink:
- JSON Lines format (one JSON object per line)
- UTC timestamps
- Structured for log aggregation systems (ELK, Datadog, CloudWatch, etc.)
- Optimized for parsing and filtering
{"timestamp":"2024-01-15T12:34:56.789Z","type":"request","method":"GET","path":"/users/42","request_id":"abc123de","client":{"host":"192.168.1.1","port":54321}}
{"timestamp":"2024-01-15T12:34:56.834Z","type":"response","status_code":200,"request_id":"abc123de"}
Why this distinction?
In practice, TTY detection means:
- Local development → Rich, human-friendly output
- Production (Docker/cloud) → Machine-parseable JSON
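The selection boils down to a check on whether stdout is a TTY. A minimal sketch with a hypothetical choose_sink_name helper (the real default-sink logic lives inside LogQueue):

```python
import sys

def choose_sink_name(isatty: bool) -> str:
    """Hypothetical sketch of the default sink selection described above."""
    return "RichConsoleSink" if isatty else "JSONSink"

# In a terminal, sys.stdout.isatty() is True; under Docker/systemd it is
# usually False, so JSON Lines output is chosen automatically.
default = choose_sink_name(sys.stdout.isatty())
```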
Request ID Correlation ¶
Every request gets a unique UUID request ID, stored in scope["stario.request_id"]. This ID appears in:
- RequestLogRecord (request arrival)
- RouteLogRecord (all application logs)
- ResponseLogRecord (request completion)
It enables you to trace a single request through all logs, across services, through load balancers, into databases, etc.
{"request_id":"abc123de","timestamp":"...","type":"request","method":"GET","path":"/api/users"}
{"request_id":"abc123de","timestamp":"...","type":"log","level":"INFO","message":"Fetching user"}
{"request_id":"abc123de","timestamp":"...","type":"log","level":"INFO","message":"User found"}
{"request_id":"abc123de","timestamp":"...","type":"response","status_code":200}
Search for "abc123de" and you see the entire request lifecycle.
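This grep-style workflow is easy to script as well. A small sketch that filters JSON Lines output by request ID (the log_lines and lifecycle names are illustrative, not part of Stario):

```python
import json

log_lines = [
    '{"request_id":"abc123de","type":"request","method":"GET","path":"/api/users"}',
    '{"request_id":"ffff0000","type":"request","method":"GET","path":"/other"}',
    '{"request_id":"abc123de","type":"log","level":"INFO","message":"Fetching user"}',
    '{"request_id":"abc123de","type":"response","status_code":200}',
]

def lifecycle(lines, request_id):
    """Return every record belonging to one request, in order."""
    return [r for r in map(json.loads, lines) if r.get("request_id") == request_id]

# Records from other requests are filtered out; what remains is the
# request -> log -> response lifecycle of a single request.
trace = lifecycle(log_lines, "abc123de")
```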
Background Logs ¶
Background logs are logs that happen AFTER the response has started. This includes:
- Logs in streaming handlers after the first yield
- Background tasks triggered during request handling
- Cleanup operations
They're tracked via scope["stario.response_started"] and marked with background=True.
Why track this?
In streaming responses, the response starts before your handler finishes. Logs emitted after that point can't affect the response status or headers, so it's useful to know which logs are "post-response". Similarly, Detached Commands run in the background after the response is sent, so this flag distinguishes validation-phase logs from background-execution logs.
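A minimal sketch of how that flag could drive the background marker, using the scope key named above (mark_background is a hypothetical helper; the real middleware logic may differ):

```python
# Once "stario.response_started" is set in the scope, every subsequent
# log record is flagged as background.
def mark_background(scope: dict, record: dict) -> dict:
    record["background"] = bool(scope.get("stario.response_started", False))
    return record

scope: dict = {}
before = mark_background(scope, {"message": "First chunk sent"})
scope["stario.response_started"] = True  # set once the response begins
after = mark_background(scope, {"message": "Second chunk sent"})
```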
In RichConsoleSink: Background logs are displayed in a separate section:
╭─ Request ────────────────────────────────────────────────────
│ 12:34:58.456 | 789abc01 | GET /stream [200] ongoing
│ +125ms | INFO | Starting stream
│ +456ms | INFO | First chunk sent
├─ Background ────────────────────────────────────────────────
│ +567ms | INFO | Second chunk sent (background)
│ +789ms | INFO | Stream complete (background)
╰──────────────────────────────────────────────────────────────
RouteLogger ¶
This is your application logger—the one you use in handlers and dependencies via dependency injection.
Using RouteLogger via Dependency Injection ¶
The recommended way to get a logger is via dependency injection:
from stario.logging import Logger

async def my_handler(logger: Logger):
    logger.info("Processing request")
    try:
        result = await do_something()
        logger.info("Success", result_id=result.id)
        return render_success(result)
    except ValueError as e:
        logger.error("Validation failed", error=str(e))
        raise HTTPException(status_code=400)
The Logger type alias is:
Logger = Annotated[RouteLogger, get_route_logger]
Log Levels ¶
Stario uses a LogLevel enum for level-based filtering:
from stario.logging.types import LogLevel
# Available levels (in order of severity)
LogLevel.DEBUG # 10 - Detailed diagnostic information
LogLevel.INFO # 20 - General informational messages (default)
LogLevel.WARNING # 30 - Warning messages, triggers buffer flush
LogLevel.ERROR # 40 - Error messages, triggers buffer flush
LogLevel.CRITICAL # 50 - Critical errors, triggers buffer flush
By default, loggers use min_level=LogLevel.INFO, so DEBUG logs are ignored.
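The filtering rule is simply a numeric comparison against min_level. An illustrative re-creation (the real enum is stario.logging.types.LogLevel):

```python
from enum import IntEnum

# Illustrative re-creation of the LogLevel values listed above.
class LogLevel(IntEnum):
    DEBUG = 10
    INFO = 20
    WARNING = 30
    ERROR = 40
    CRITICAL = 50

MIN_LEVEL = LogLevel.INFO  # default min_level

def accepted(level: LogLevel) -> bool:
    """A record is kept only if its level meets the minimum."""
    return level >= MIN_LEVEL
```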
Available Methods ¶
- debug(message, **context) - Debug level (ignored by default unless min_level is lowered)
- info(message, **context) - Info level (buffered by default)
- warning(message, **context) - Warning level (triggers flush of buffered logs)
- error(message, **context) - Error level (triggers flush of buffered logs)
- critical(message, **context) - Critical level (triggers flush of buffered logs)
- exception(message, **context) - Error level with automatic exception traceback capture
- bind(**context) - Clone logger with additional context
- flush() - Manually flush buffer and switch to pass-through
- discard() - Discard buffered logs without writing
Threshold-Based Buffering: Execution Flow ¶
RouteLogger uses threshold-based buffering controlled by two levels:
- min_level (default: INFO) - Logs below this level are dropped entirely (e.g., DEBUG logs by default)
- threshold_level (default: WARNING) - When a log at or above this level is recorded, all buffered logs are flushed
How it works:
async def process_user(user_id: int, logger: Logger):
    # min_level=INFO, threshold_level=WARNING by default
    logger.info("Starting process", user_id=user_id)  # Buffered (INFO < threshold)
    logger.info("Loading data", user_id=user_id)      # Buffered
    logger.debug("Debug info", extra="data")          # DROPPED (DEBUG < min_level)
    logger.info("Validating", user_id=user_id)        # Buffered
    if not valid:
        logger.warning("Invalid data", user_id=user_id)  # ← Triggers flush!
        # All 3 buffered INFO logs are now written (marked buffered=True)
        # Plus this WARNING log
If no warning/error occurs (request succeeds):
async def process_user(user_id: int, logger: Logger):
    logger.info("Starting process", user_id=user_id)  # Buffered
    logger.info("Loading data", user_id=user_id)      # Buffered
    logger.info("Processing", user_id=user_id)        # Buffered
    # Request completes successfully
    # → Buffered logs are discarded (never written to any sink)
This dramatically reduces log volume in production—successful requests produce no logs, while errors automatically flush all context.
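The whole mechanism can be modeled in a few lines. A toy BufferingLogger, assuming threshold-triggered flush and pass-through afterwards as described above (illustrative only, not the real RouteLogger):

```python
# Levels as plain ints, matching the LogLevel values above.
DEBUG, INFO, WARNING = 10, 20, 30

class BufferingLogger:
    """Toy model of threshold-based buffering (not the real RouteLogger)."""

    def __init__(self, min_level=INFO, threshold_level=WARNING):
        self.min_level = min_level
        self.threshold_level = threshold_level
        self.buffer = []         # held records
        self.written = []        # records delivered to sinks
        self.passthrough = False

    def log(self, level, message, **context):
        if level < self.min_level:
            return  # below min_level: dropped entirely
        record = {"level": level, "message": message, "context": context}
        if self.passthrough:
            self.written.append(record)  # after a flush, write immediately
        elif level >= self.threshold_level:
            self.flush()                 # threshold hit: release the buffer...
            self.written.append(record)  # ...then write the triggering record
        else:
            self.buffer.append(record)

    def flush(self):
        for r in self.buffer:
            r["buffered"] = True  # mark records that were delayed
        self.written.extend(self.buffer)
        self.buffer.clear()
        self.passthrough = True

    def discard(self):
        self.buffer.clear()

log = BufferingLogger()
log.log(INFO, "Starting process", user_id=42)  # buffered
log.log(DEBUG, "Debug info")                   # dropped
log.log(WARNING, "Invalid data", user_id=42)   # triggers flush
```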
Context Binding ¶
Add contextual information that automatically appears in all subsequent logs:
async def handler(user_id: PathParam[int], logger: Logger):
    # Bind context to logger
    logger = logger.bind(user_id=user_id, operation="process")
    logger.info("Started")  # Includes user_id and operation
    await do_work()
    logger.info("Completed")  # Still includes context
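Conceptually, bind() clones the logger with a merged context dict that is stamped onto every subsequent record. An illustrative MiniLogger sketch (the real RouteLogger.bind may differ):

```python
class MiniLogger:
    """Toy logger illustrating how bind() can merge context into records."""

    def __init__(self, context=None):
        self.context = dict(context or {})
        self.records = []

    def bind(self, **extra):
        # Clone with merged context; share the same output list.
        child = MiniLogger({**self.context, **extra})
        child.records = self.records
        return child

    def info(self, message, **context):
        self.records.append({"message": message, **self.context, **context})

log = MiniLogger().bind(user_id=42, operation="process")
log.info("Started")    # carries user_id and operation automatically
log.info("Completed")  # still carries the bound context
```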
Manual Control ¶
Force flush buffered logs:
logger.info("Debug info 1")
logger.info("Debug info 2")
logger.flush() # ← Manually flush
# All buffered logs are now written
Discard buffered logs:
logger.info("Tentative operation")
if not_needed:
    logger.discard()  # ← Throw away buffered logs
Timed Operations ¶
Measure and log operation duration using the timed() context manager:
import asyncio
from stario.logging.types import LogLevel

async def expensive_operation(logger: Logger):
    # Use as a context manager to measure elapsed time
    with logger.timed("Fetching from API", level=LogLevel.INFO):
        await asyncio.sleep(2.0)  # Simulate work
    # Logs: "Fetching from API" with duration in seconds

    # Custom duration key name
    with logger.timed("Database query", duration_key="query_time"):
        await asyncio.sleep(0.5)
    # Logs: "Database query" with query_time=0.5
The logged output includes a duration field (or custom key) with the elapsed time in seconds.
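Such a helper is essentially a context manager around a monotonic clock. An illustrative stand-alone sketch (the real logger.timed() feeds the record into the logging pipeline rather than a plain list):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(records: list, message: str, duration_key: str = "duration"):
    """Measure elapsed time of the with-block and append one record."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        records.append({"message": message, duration_key: elapsed})

records: list = []
with timed(records, "Database query", duration_key="query_time"):
    sum(range(1000))  # stand-in for real work
```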
LogQueue (Infrastructure) ¶
LogQueue is the central hub that collects log records from all loggers, buffers them, and periodically flushes them to sinks (output destinations).
Key features:
- Buffered writes - Collects records and writes in batches for performance
- Automatic flushing - Background task flushes at intervals (default 50ms)
- Size-based flushing - Flushes immediately when buffer reaches threshold (default 100 records)
- Multiple sinks - Can write to multiple destinations simultaneously
- Default sink selection - Automatically chooses RichConsoleSink (TTY/dev) or JSONSink (non-TTY/prod)
Configuration:
from stario import Stario
from stario.logging.queue import LogQueue
from stario.logging.sinks import JSONSink
from stario.logging.types import LogLevel
# Custom configuration
log_queue = LogQueue(
    sinks=[JSONSink()],   # Custom sinks list
    buffer_size=100,      # Flush when buffer reaches this size
    flush_interval=0.05,  # Flush every 50ms (default)
)
app = Stario(log_queue=log_queue)
Default behavior (no configuration needed):
- TTY detected → Uses RichConsoleSink (colored console output)
- Non-TTY (Docker/systemd) → Uses JSONSink (structured JSON output)
Sinks ¶
Sinks are outputs for log records. You can have multiple sinks writing simultaneously (e.g., console + file + database).
Built-in Sinks ¶
RichConsoleSink:
- Human-friendly output with colors and formatting
- Groups logs by request correlation ID
- Shows live updates for in-progress requests
- Displays relative timestamps within requests
- Separates background logs (after response started)
JSONSink:
- JSON Lines format (newline-delimited JSON objects)
- UTC timestamps
- Optimized for log aggregation systems
- Compact, machine-parseable
Custom Sinks ¶
You can write your own sinks for custom destinations (databases, external services, files, etc.):
Sink Protocol:
from stario.logging.types import Sink, Record, RequestLogRecord, RouteLogRecord, ResponseLogRecord
class CustomSink:
    """Implement this protocol to create a custom sink."""

    def open(self) -> None:
        """Called when LogQueue starts - initialize resources."""
        pass

    def close(self) -> None:
        """Called when LogQueue stops - cleanup resources."""
        pass

    async def write(self, records: list[Record]) -> None:
        """Called with batches of log records to write."""
        for record in records:
            if isinstance(record, RequestLogRecord):
                # Handle request log (method, path, client, request_id)
                print(f"Request: {record.method} {record.path}")
            elif isinstance(record, RouteLogRecord):
                # Handle route log (message, level, context, buffered, background)
                print(f"[{record.level.name}] {record.message} {record.context}")
            elif isinstance(record, ResponseLogRecord):
                # Handle response log (status_code, exc_info, request_id)
                print(f"Response: {record.status_code}")
Required methods:
- open() - Initialize connections/resources (called on startup)
- close() - Clean up resources (called on shutdown)
- write(records: list[Record]) - Async method to write batches of records
See Creating Custom Logging Sinks for complete examples including database sinks, file sinks, and external service integrations.
Multiple sinks example:
from stario import Stario
from stario.logging.queue import LogQueue
from stario.logging.sinks import RichConsoleSink, JSONSink
log_queue = LogQueue(
    sinks=[
        RichConsoleSink(),  # Console for local viewing
        JSONSink(),         # JSON Lines for log aggregation
        MyCustomSink(),     # Your custom sink
    ]
)
app = Stario(log_queue=log_queue)
AccessLogger ¶
Used internally by GuardianMiddleware to log request/response lifecycle.
What it logs:
- Request arrival - Method, path, client IP, request ID
- Request completion - Status code, exception info (if any), request ID
You typically don't use AccessLogger directly—it's handled automatically by the framework. These logs appear in your console/logs without any code on your part.
Buffered Logs ¶
Buffered logs are logs that were held in the buffer and only written when the threshold was triggered. They're marked with buffered=True.
In RichConsoleSink: Indicated with an asterisk:
╭─ Request ────────────────────────────────────────────────────
│ +10ms | *INFO | Operation started (buffered)
│ +20ms | *INFO | Validation passed (buffered)
│ +30ms | WARNING | Rate limit approaching (trigger)
╰──────────────────────────────────────────────────────────────
The asterisk shows these logs were delayed—they happened earlier but weren't written until the WARNING triggered the flush.
Best Practices ¶
- Use dependency injection to get the logger
- Add context with .bind() for better traceability
- Let threshold buffering work—don't flush unnecessarily
- Log structured data as kwargs, not in the message:
# Good
logger.info("User created", user_id=user.id, email=user.email)
# Bad
logger.info(f"User {user.id} created with email {user.email}")
- Use request IDs to trace requests through logs
- Don't log sensitive data (passwords, tokens, PII)
Logging Exceptions ¶
All exceptions are automatically logged by GuardianMiddleware with:
- Correlation ID (request ID) - for tracing
- Stack trace - for debugging
- Request details (method, path, client IP)
- Response status code
- Whether response had already started (streaming errors)
The request ID allows you to search logs and see the entire request lifecycle, including what led to the exception.
Exception Log Format ¶
During development, exceptions are logged to the console with rich formatting, which should help you debug them quickly. When using JSONSink (production), exception logs include:
ResponseLogRecord with exception:
{
  "timestamp": "2024-01-15T12:34:56.789Z",
  "type": "response",
  "status_code": 500,
  "request_id": "abc123de-4567-89f0-1234-567890abcdef",
  "exception": {
    "type": "ValueError",
    "message": "Invalid user ID",
    "traceback": "Traceback (most recent call last):\n File \"handler.py\", line 42, in get_user\n raise ValueError(f\"Invalid: {user_id}\")"
  }
}
RouteLogRecord with exception (in exception handler context):
{
  "timestamp": "2024-01-15T12:34:56.800Z",
  "type": "log",
  "level": "ERROR",
  "message": "Failed to process user",
  "buffered": false,
  "background": false,
  "request_id": "abc123de-4567-89f0-1234-567890abcdef",
  "context": {"user_id": 42},
  "exception": {
    "type": "ValueError",
    "message": "Invalid user ID",
    "traceback": "Traceback..."
  }
}
RouteLogRecord without exception (normal logging):
{
  "timestamp": "2024-01-15T12:34:56.750Z",
  "type": "log",
  "level": "INFO",
  "message": "User created",
  "buffered": true,
  "background": false,
  "request_id": "abc123de-4567-89f0-1234-567890abcdef",
  "context": {"user_id": 42, "email": "alice@example.com"}
}
The fields included:
- type - Record type: "request", "response", or "log"
- timestamp - ISO 8601 UTC timestamp
- request_id - Unique request ID, appears in all logs for this request
- status_code - (Response only) HTTP response code (200, 404, 500, etc.)
- level - (Log only) Log level: DEBUG, INFO, WARNING, ERROR, CRITICAL
- message - (Log only) The log message
- buffered - (Log only) Whether this log was held in buffer before flushing
- background - (Log only) Whether this log occurred after response started
- context - (Log only) Structured context data passed as kwargs
- exception - (If present) Nested exception object with type, message, traceback fields
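Consumers of the JSON Lines output typically dispatch on the type field. A small illustrative sketch (summarize is a hypothetical helper, not part of Stario):

```python
import json

def summarize(line: str) -> str:
    """One-line summary of a record, dispatching on its "type" field."""
    rec = json.loads(line)
    if rec["type"] == "request":
        return f'{rec["method"]} {rec["path"]}'
    if rec["type"] == "response":
        return f'status {rec["status_code"]}'
    return f'{rec["level"]}: {rec["message"]}'  # "log" records

line = '{"type":"log","level":"INFO","message":"User created","request_id":"abc123de"}'
```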
See Also ¶
- Exceptions - How exceptions are caught and handled
- Middlewares - GuardianMiddleware which logs all exceptions
- Understanding Request IDs - Design decisions about why Stario has custom logging
- Creating Custom Logging Sinks - Database, file, and external service examples
