# Deployment: Containers, TLS, and safe releases
This guide walks from running Stario with the stock CLI to a minimal container and a TLS front end, so you can ship without inventing a bespoke stack on day one.
Mental model: Stario listens for plain HTTP on the address you configure. TLS termination (HTTPS to clients, HTTP to the app) and trust headers live at the edge (Caddy, nginx, a load balancer). You still point `stario serve` at the same `MODULE:bootstrap` and the same in-process request limits as in staging; only the network path changes.
Stario is not ASGI: there is no `uvicorn main:app`-style ASGI callable in the framework. You run the Stario CLI (`stario serve` / `stario watch`) or construct `Server` in code; the framework runs its own built-in asyncio HTTP/1.1 stack, not Hypercorn/Uvicorn workers.
## 1. Start with `stario serve` and a tracer
The supported path is the same entry you use in development: `stario serve MODULE:bootstrap`. The CLI builds `Server`, runs your bootstrap, and blocks until shutdown.
Pick how telemetry leaves the process with `--tracer`:
- `json` — NDJSON lines on stdout (typical in containers and log aggregators).
- `sqlite` — local SQLite for ad hoc inspection.
- `tty` — interactive span tree on a terminal (fine on a laptop; rarely what you want in production).
- `auto`, or omit `--tracer` — TTY span tree when stdout is a TTY, otherwise `JsonTracer` (what you usually get under Docker without a TTY).
You can also pass a custom factory as `<module>:<callable>` (a zero-argument constructor returning a `Tracer`) — see Telemetry — Custom tracers.
```shell
stario serve main:bootstrap --host 127.0.0.1 --port 8000 --tracer json
```

For day-to-day local work, `stario watch` is still the right loop; for anything you intend to release, prove the same bootstrap under `serve` with the tracer you plan to run in production (often `json` or a custom sink, not `tty`).
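The `json` tracer emits one JSON object per line (NDJSON), which most log pipelines ingest directly. As a quick sanity check you can parse captured output yourself. A minimal sketch, assuming only that each line is a standalone JSON object; the actual field names depend on your Stario version and are purely illustrative here:

```python
import json


def parse_ndjson(raw: str) -> list[dict]:
    """Parse newline-delimited JSON, skipping blank lines."""
    spans = []
    for line in raw.splitlines():
        line = line.strip()
        if line:
            spans.append(json.loads(line))
    return spans


# Hypothetical captured tracer output; real field names will differ.
captured = '{"name": "request", "ms": 12.5}\n{"name": "db.query", "ms": 3.1}\n'
spans = parse_ndjson(captured)  # two dicts, one per line
```

The same loop works against `docker logs` output piped to a file, since each span stays on its own line.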
## 2. Why TLS lives in front of Stario
Stario’s HTTP stack serves plain HTTP/1.1 on the listen socket you configure. It does not terminate TLS for you: no automatic HTTPS, no certificates inside the framework. TLS termination means decrypting HTTPS from browsers at the edge and speaking plain HTTP to Stario on a trusted path. That usually belongs at the edge (Caddy, nginx, a cloud load balancer) or on a sidecar.
Put another way: run Stario behind something that speaks HTTPS to clients and forwards HTTP to your process (or connect over a Unix socket on the same host). That edge also gives you stable `Host` headers and HTTP/2 or HTTP/3 toward the browser, while the upstream connection to Stario stays HTTP/1.1, which the server implements.
App-side limits (header/body caps, timeouts) and edge limits (rate limits, WAF) answer different questions; the sections below walk Stario’s built-in caps first, then what you typically add in front.
## 3. Minimal Docker image
The shape is: install your app and dependencies, expose the port you bind, and have `CMD` invoke `stario serve`. Bind `0.0.0.0` inside the container so traffic from the bridge network reaches the process (the CLI default `127.0.0.1` only accepts local connections).
```dockerfile
FROM python:3.14-slim
WORKDIR /app
COPY . .
RUN pip install uv && uv sync --frozen
ENV PYTHONUNBUFFERED=1
EXPOSE 8000
CMD ["uv", "run", "stario", "serve", "main:bootstrap", "--host", "0.0.0.0", "--port", "8000", "--tracer", "json"]
```

Adjust `COPY` / `RUN` to match your layout (`pip install .`, `pip install -e .`, multi-stage builds, non-root user, etc.). A minimal pip-only variant is `RUN pip install --no-cache-dir .` when you do not use uv in the image. The important part for safety is: one clear `CMD`, no shell wrapper unless you need it, and the same `MODULE:bootstrap` you test with `TestClient`.
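If you run the container with Compose, the same flags carry over unchanged. A minimal sketch, assuming the Dockerfile above; the service name, port mapping, and environment variable are illustrative, not a Stario requirement:

```yaml
services:
  app:
    build: .
    ports:
      - "127.0.0.1:8000:8000"   # publish only on localhost; the TLS edge fronts it
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app   # read inside bootstrap
    restart: unless-stopped
```

Publishing on `127.0.0.1` keeps the plain-HTTP port off the public interface, matching the "TLS at the edge" model above.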
## 4. Caddy in front (TLS + reverse proxy)
Caddy can obtain and renew certificates automatically. Stario stays on HTTP on localhost or a Docker network; Caddy terminates TLS and proxies to that upstream with `reverse_proxy`.
TCP upstream (Stario listening on `127.0.0.1:8000` on the host or another container):
```caddyfile
example.com {
	reverse_proxy 127.0.0.1:8000
}
```

Unix socket upstream (Stario started with `--unix-socket /run/stario/app.sock`; see Server and CLI `--unix-socket`):
```caddyfile
example.com {
	reverse_proxy unix//run/stario/app.sock
}
```

Use a socket path your process can create and that Caddy can read (shared volume, matching user/group, or `chmod` appropriately). Sockets are a good fit when Stario and Caddy share a host or a tightly coupled pod: no TCP port is exposed between them.
Tune proxy timeouts if you use long-lived streams (SSE, Datastar): the defaults are often fine, but aggressive idle cuts break long GET streams. The in-process `graceful_timeout` (see Runtime — Server) governs how long open connections and `app.create_task` work can run during shutdown; it is separate from proxy idle timeouts.
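For SSE/Datastar routes specifically, one common pattern is to disable response buffering on the streaming paths so events reach the browser immediately. An illustrative sketch; the `/events/*` path and matcher name are assumptions, and you should verify directive behavior against the Caddy version you deploy:

```caddyfile
example.com {
	@stream path /events/*
	reverse_proxy @stream 127.0.0.1:8000 {
		flush_interval -1   # forward response bytes immediately (no buffering)
	}
	reverse_proxy 127.0.0.1:8000
}
```

Recent Caddy versions already detect `text/event-stream` responses and stream them, so the explicit matcher mainly documents intent and protects non-standard streaming content types.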
## 5. Request limits on the Stario side
The HTTP stack enforces size caps on every connection—you do not need to reimplement them in each handler.
| Control | CLI flag | Default | On violation |
|---|---|---|---|
| Request line + headers (total) | --max-request-header-bytes | 64 KiB | 431 Request Header Fields Too Large |
| Request body (buffered) | --max-request-body-bytes | 10 MiB | 413 Payload Too Large |
| Body read stalls between chunks | (fixed in framework) | 30 s between chunks | 408 Request Timeout (slow upload / stalled body) |
Example:
```shell
stario serve main:bootstrap --host 0.0.0.0 --port 8000 --tracer json \
  --max-request-header-bytes 65536 \
  --max-request-body-bytes 10485760
```

The body reader also enforces a slow-read timeout between body chunks (default 30 seconds), so a peer cannot hold an upload open indefinitely by dribbling bytes. Classic slow-header attacks are a separate concern: address them at the edge (timeouts, limits) rather than assuming this one knob covers every slow-client pattern. The timeout is fixed in the framework today; it is not a separate CLI flag. If you need a different policy, treat it as a library / fork discussion, not something you toggle per route from the stock CLI.
Response security headers (CSP, HSTS, `X-Frame-Options`, …) are your responsibility in handlers or middleware; Stario does not inject a default security-header bundle.
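Stario's middleware API is not shown in this guide, so the following is only a framework-agnostic sketch of the idea: keep the header bundle in one place as plain data and merge it into each response's headers, letting handler-specific values win. The header values are a common baseline, not a recommendation for your exact policy:

```python
# A baseline security-header bundle; tune each value to your actual policy.
SECURITY_HEADERS = {
    "Strict-Transport-Security": "max-age=31536000; includeSubDomains",
    "X-Content-Type-Options": "nosniff",
    "Referrer-Policy": "strict-origin-when-cross-origin",
    "X-Frame-Options": "DENY",
}


def with_security_headers(headers: dict[str, str]) -> dict[str, str]:
    """Merge the baseline bundle into response headers; explicit values win."""
    return {**SECURITY_HEADERS, **headers}


# A handler that sets its own X-Frame-Options keeps its value.
merged = with_security_headers({"Content-Type": "text/html",
                                "X-Frame-Options": "SAMEORIGIN"})
```

Whether you apply this per handler or in middleware, keeping the bundle as one dict means there is exactly one place to audit when the policy changes.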
## 6. Limits, headers, and timeouts on the reverse proxy
Stario does not rewrite `Request.host` or scheme from `X-Forwarded-*` automatically — your app sees the `Host` header the edge sent. Configure proxies so they forward the host and scheme you expect for URL generation and routing.
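Since the framework does not resolve this for you, reconstruct the client-facing scheme and host explicitly where you need them (absolute URLs, redirects). A minimal, framework-agnostic sketch, assuming lower-cased header keys; only trust these headers when requests can reach the app exclusively through an edge that always sets them, because otherwise a client can spoof them:

```python
def external_base_url(headers: dict[str, str], default_scheme: str = "http") -> str:
    """Build the client-facing base URL from proxy-forwarded headers.

    Only safe behind a trusted edge that always sets X-Forwarded-*;
    a directly reachable app would let clients forge these values.
    """
    scheme = headers.get("x-forwarded-proto", default_scheme)
    host = headers.get("x-forwarded-host") or headers.get("host", "localhost")
    return f"{scheme}://{host}"


url = external_base_url({"host": "10.0.0.5:8000",
                         "x-forwarded-proto": "https",
                         "x-forwarded-host": "example.com"})
# url == "https://example.com"
```

Caddy's `reverse_proxy` sets `X-Forwarded-Proto` and `X-Forwarded-Host` by default, so this pattern lines up with the Caddyfile examples above.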
Putting Caddy (or nginx, Envoy, a cloud load balancer, …) in front is how you add defense in depth: many teams set stricter limits at the edge than inside the app so garbage never reaches Python.
- Request body size — Caddy can cap bodies before they hit Stario (`request_body` with `max_size`, e.g. `10MB`). Align `max_size` with `--max-request-body-bytes` so behavior is predictable (the edge rejects early; Stario still enforces its own cap).
- Timeouts — Configure read, write, and idle timeouts on `reverse_proxy` / transport so broken clients and pathological uploads do not tie up workers. Keep SSE/Datastar routes in mind: long `GET` streams need generous read-side timeouts or route-specific matchers so legitimate streams are not cut off.
- Security headers at the edge — You can set `Strict-Transport-Security`, `X-Content-Type-Options`, `Referrer-Policy`, `Permissions-Policy`, and (if you own the policy) `Content-Security-Policy` in Caddy's `header` directive so every response gets them without duplicating logic in every handler.
Example sketch (adjust names and timeouts to your deployment; check current Caddy docs for your version):
```caddyfile
example.com {
	request_body {
		max_size 10MB
	}
	header {
		Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
		X-Content-Type-Options "nosniff"
		Referrer-Policy "strict-origin-when-cross-origin"
	}
	reverse_proxy 127.0.0.1:8000
}
```

Unix socket upstreams use the same idea: size limits and headers apply before traffic reaches the socket.
## 7. What is not built in (rate limits, DDoS, WAF)
Stario is an application server, not an edge firewall. By default there is no request rate limiting, per-IP quotas, bot detection, or DDoS mitigation in the framework—those belong in front of Stario (Caddy modules or plugins, your CDN / WAF, cloud load balancer rules, or a dedicated reverse proxy tier).
Plan capacity and configuration holistically: process limits, proxy timeouts, body/header caps, TLS, and (when you need it) rate-based or challenge-based controls at the edge. None of that replaces sound app logic (authz, validation, safe defaults)—it complements it.
## 8. Configuration and environment
Stario does not mandate a single config file format. Common patterns:
- Environment variables — read in `bootstrap` (`os.environ`, `pydantic-settings`, etc.) for database URLs, API keys, and feature flags. Pass them with `docker run -e`, Compose `environment:`, or your orchestrator's secrets.
- CLI flags — `--host`, `--port`, `--unix-socket`, `--tracer`, `--max-request-header-bytes`, `--max-request-body-bytes`, and compression knobs are set at process start; mirror what you validated in staging.
- Bootstrap — keep secrets and clients in `bootstrap` lifetime; avoid globals that ignore configuration.
Document the exact `stario serve …` line (or `ENTRYPOINT`) next to your image tag so releases are reproducible.
## Related
- Runtime — Server — bind addresses, graceful shutdown, `Server`.
- Getting insights from SQLite tracer — SQL over `SqliteTracer` output.
- Telemetry — Custom tracers — vendor `Tracer` implementations.
- Mapping errors to HTTP responses — stable HTTP mapping in production.
- Authentication with cookie sessions — cookies behind HTTPS.