Why Stario Has Custom Logging

Most logging frameworks optimize for production - aggregating and filtering millions of logs after a service has failed. But during development, the problem is different. You're not analyzing a million requests; you're trying to understand what a single request does right now.

This is where standard logging breaks down.

The Gap Between Production and Development

During development with concurrent requests, standard logging becomes unreadable:

INFO:root:Request arrived for /users
INFO:root:Request arrived for /posts
INFO:root:Validating user ID
INFO:root:Found 5 posts
INFO:root:Querying database
INFO:root:User created successfully
INFO:root:Database query completed

Which lines belong to /users? Which to /posts? With 50 concurrent requests during development, you lose the ability to trace a single request's flow.

The workarounds developers use today - adding request IDs to log messages, grepping for them, filtering by hand - work, but they reveal a gap in what the framework provides. Developers are solving this problem themselves because the logging layer doesn't.
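The hand-rolled version of that workaround usually looks something like this: stash a request ID in a ContextVar (so it survives async boundaries) and inject it into every record with a logging.Filter. A sketch with plain stdlib logging - the names here are illustrative, not Stario code:

```python
import logging
import uuid
from contextvars import ContextVar

# Per-request ID, carried implicitly through sync and async call chains.
request_id: ContextVar[str] = ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = request_id.get()
        return True  # never drop the record, just annotate it

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(levelname)s | %(request_id)s | %(message)s"))
handler.addFilter(RequestIdFilter())

logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Each request handler sets the ID on entry...
request_id.set(uuid.uuid4().hex[:8])
logger.info("Validating user ID")
# ...and you grep that ID back out of the combined stream afterwards.
```

The mechanism works, but every line of it is boilerplate the application author has to write and maintain.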

Solving it at the framework level means developers spend less time staring at log output trying to correlate lines, and more time understanding what their code actually does.

Development: Correlation IDs as First-Class

Stario automatically groups logs by correlation ID, making each request's flow visible:

   12:34:56.789 | abc123de | GET     /users/42                        [200] 45.2ms
   12:34:57.123 | fed456ba | POST    /api/create                      [201] 123.5ms
╭─ Request ────────────────────────────────────────────────────────────────────
│  12:34:58.456 | 789abc01 | POST    /process                         [200] 567.8ms
│        +125ms | INFO     | Starting process          | item_count=42
│        +456ms | INFO     | Validation passed
│        +567ms | INFO     | Process complete          | success=true
╰──────────────────────────────────────────────────────────────────────────────

Concurrent requests remain visible, but each one is grouped coherently. You follow a single request's execution without mental overhead. This makes debugging concurrent requests tractable again.

Production: Standard JSON

In production, logging should be no different from any other service's. Stario outputs JSON Lines automatically:

{"timestamp":"2024-01-15T12:34:56.789Z","type":"request","method":"GET","path":"/users/42","request_id":"abc123de"}
{"timestamp":"2024-01-15T12:34:56.834Z","type":"log","level":"INFO","message":"Starting process","request_id":"abc123de","context":{"item_count":42}}
{"timestamp":"2024-01-15T12:34:56.900Z","type":"response","status_code":200,"request_id":"abc123de"}

This works with Datadog, ELK, and CloudWatch out of the box - standard industry practice, no vendor lock-in.
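A formatter that produces lines in this shape is a small amount of code. This is a sketch, not Stario's actual formatter - the field names simply mirror the example above, not a guaranteed schema:

```python
import json
import logging
from datetime import datetime, timezone

# One JSON object per record, one record per line (JSON Lines).
class JsonLinesFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        ts = datetime.fromtimestamp(record.created, tz=timezone.utc)
        return json.dumps({
            "timestamp": ts.isoformat(timespec="milliseconds").replace("+00:00", "Z"),
            "type": "log",
            "level": record.levelname,
            "message": record.getMessage(),
            # request_id is attached upstream (e.g. by a filter); None if absent.
            "request_id": getattr(record, "request_id", None),
        })
```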

Smart Buffering: A Design Philosophy

One design choice underlies both modes: successful requests should produce no logs.

Logs are buffered by default and only flushed when something needs investigation - a warning, error, or explicit flush. This means:

  • Development: Only logs related to issues appear, keeping output clean
  • Production: Zero noise from successful requests, full context when failures occur
  • No configuration needed
  • (You can disable buffering if you need to see all logs on success)

Rather than asking "how do I find the problem in a million logs?", the framework assumes "the successful path should be silent, the failing path should be explicit."
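The stdlib already models this exact idea: logging.handlers.MemoryHandler buffers records and only forwards them to its target once a record reaches flushLevel (or the buffer fills). A sketch of the buffering behavior in those terms - this illustrates the design, not Stario's implementation:

```python
import logging
from logging.handlers import MemoryHandler

# Records accumulate in memory; nothing is written for a successful request.
target = logging.StreamHandler()
buffered = MemoryHandler(capacity=1000, flushLevel=logging.WARNING, target=target)

log = logging.getLogger("request")
log.addHandler(buffered)
log.setLevel(logging.INFO)

log.info("Starting process")      # held in the buffer, nothing printed
log.info("Validation passed")     # still silent
log.warning("Payment declined")   # flushes the buffer: full context appears at once
```

The payoff is on the last line: the warning arrives with every INFO record that preceded it, so the failing path is explicit while the successful path stays silent.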

One System, Two Contexts

The same logging system handles both - no setup required, TTY detection is automatic:

  • Development (TTY): Clean per-request grouping for understanding request flow
  • Production (non-TTY): Standard JSON for aggregation and analysis
  • Same correlation ID throughout
  • Same buffering behavior throughout
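Mode selection like this typically comes down to a single isatty check at startup. A sketch of the idea - the formatters here are illustrative, not Stario's internals:

```python
import json
import logging
import sys

# Minimal machine-readable formatter for the non-TTY branch.
class JsonFormatter(logging.Formatter):
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({"level": record.levelname, "message": record.getMessage()})

def make_handler() -> logging.Handler:
    handler = logging.StreamHandler(sys.stderr)
    if sys.stderr.isatty():
        # Interactive terminal: human-readable development output.
        handler.setFormatter(logging.Formatter("%(asctime)s | %(levelname)-7s | %(message)s"))
    else:
        # Piped or captured: JSON Lines for aggregators.
        handler.setFormatter(JsonFormatter())
    return handler
```

Because the check runs once and everything downstream is shared, the correlation IDs and buffering behavior are identical in both branches; only the rendering differs.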

Design Consideration

Different environments have different needs. Production logging is largely a solved problem - the open question is how to make development logging actually useful. This design optimizes for where developers spend most of their time: clean per-request visibility during development, with the same system scaling to production without operational surprises.

See Also