User uploads and storage
StaticAssets is for build-time assets you ship with the app—immutable, fingerprinted, safe to cache aggressively. User uploads are different: visitor-supplied bytes that need storage, naming, metadata, and access control.
Stario does not ship a standard upload class or storage abstraction; the shape depends on your product (object storage vs disk, streaming vs buffered bodies, auth, retention, …). This page sketches handler-level read and write patterns; validation, auth, and storage layout stay in your code.
Prefer object storage for larger files
If uploads are large or frequent, routing every byte through your Python process usually makes it the bottleneck. The common pattern is: your app issues a signed URL (or short-lived upload credential), the browser puts the object straight to S3 or another bucket, and your backend only stores the object key (and metadata) after the fact—or the client tells you when the upload finished. That keeps memory, timeouts, and worker occupancy predictable; see also Deployment for timeouts and edge behavior.
Typical flow (provider-specific): issue a short-lived, scoped upload URL or credential; the client puts the object in the bucket (PUT/POST per API); your app stores the key and metadata after you know the upload finished—HEAD/size check, provider callback, or an idempotent “finalize” step if the client retries. Browser uploads to another origin usually need CORS, HTTPS-only URLs, and tight TTLs.
If you store files locally
You still need a policy only your app can define:
Names — Using the client’s filename as the on-disk name collides the moment two users upload photo.jpg. Anything that maps “logical id → path” without collisions usually means your own naming scheme (UUIDs, content hashes, per-user directories) and often a database row for metadata and authorization.

Serving back — A download handler reads the bytes you stored, sets an accurate Content-Type (for example with mimetypes.guess_type in the standard library), and responds. Set Content-Disposition: inline when the browser should render the file in the page, and attachment (with a safe filename when you expose one) when you want a download prompt; see RFC 6266 for filename encoding.
Server limits on request bodies
Uploads are subject to Stario’s maximum body size and read behavior whether you use Request.body() or Request.stream(): both paths count bytes against the same cap and the same slow-read rules (Request).
await c.req.body() buffers the entire request body in memory (up to the configured maximum). Use Request.stream() when you want to read the upload in chunks without holding the full payload in RAM.
Raise the body and header caps with stario serve (for example --max-request-body-bytes, --max-request-header-bytes); defaults and CLI examples are in Stario-side limits. The stall between body chunks (slow upload) timeout is fixed in the framework today—it is not a separate stario serve flag; see the same section for details. Match your reverse proxy so limits are predictable end-to-end. Status codes for those failures (413, 408) come from the HTTP stack as described there; to map your own exceptions to HTTP responses in handlers, see Mapping errors to HTTP responses.
| What | Default | Typical response if exceeded |
|---|---|---|
| Request body size | 10 MiB | 413 Payload Too Large |
| Stall between body chunks | 30 s | 408 Request Timeout (slow upload) |
Minimal upload and download handlers
The framework gives you Context, Request.body() or Request.stream(), stario.responses, and Writer when you need streaming or a precise Content-Type on bytes.
For short acknowledgements and errors, use helpers such as responses.text instead of calling Writer.respond with raw Content-Type bytes. For a buffered binary download with a guessed MIME type, w.respond sets Content-Type and Content-Length for the body—there is no responses.file in the stock helpers. The second argument to respond is the raw Content-Type bytes (for example mime.encode("ascii")). With default headers, respond may also negotiate Content-Encoding for compression; see Compression under Writer.
```python
import mimetypes
from pathlib import Path

from stario import Context, Writer, responses

UPLOAD_DIR = Path("var/uploads")
UPLOAD_DIR.mkdir(parents=True, exist_ok=True)


async def upload_raw_body(c: Context, w: Writer) -> None:
    data = await c.req.body()
    # Illustrative path only — use unique keys, a single trusted upload root, and validation.
    dest = UPLOAD_DIR / "myfile.bin"
    dest.write_bytes(data)  # Sync disk write; see streaming example for cooperative disk IO.
    responses.text(w, "ok")


async def download_stored_file(c: Context, w: Writer) -> None:
    path = UPLOAD_DIR / "myfile.bin"
    if not path.is_file():
        responses.text(w, "Not found", status=404)
        return
    mime = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
    w.respond(path.read_bytes(), mime.encode("ascii"))
```

Register these handlers from your app bootstrap or route table like any other route—see Structuring larger applications.
Streaming upload and download
If the upload is large enough that you want to avoid holding the whole body in memory, read the request with Request.stream() and write each chunk to disk instead of calling Request.body(). For large downloads, open the stored file and write chunks through Writer after write_headers. Set Content-Length from the file size before write_headers when you know the size so the response uses a fixed-length body instead of chunked encoding (Writer).
```python
import mimetypes
from pathlib import Path

from stario import Context, Writer, responses

UPLOAD_DIR = Path("var/uploads")
UPLOAD_DIR.mkdir(parents=True, exist_ok=True)


async def upload_stream_to_file(c: Context, w: Writer) -> None:
    dest = UPLOAD_DIR / "myfile.bin"
    # Sync open/write blocks the event loop; use e.g. aiofiles (async read/write) or
    # asyncio.to_thread with the same loop structure if you need cooperative disk IO under load.
    with dest.open("wb") as f:
        async for chunk in c.req.stream():
            f.write(chunk)
    responses.text(w, "ok")


async def download_stream_file(c: Context, w: Writer) -> None:
    path = UPLOAD_DIR / "myfile.bin"
    if not path.is_file():
        responses.text(w, "Not found", status=404)
        return
    mime = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
    size = path.stat().st_size
    w.headers.set("Content-Type", mime)
    w.headers.set("Content-Length", str(size))
    w.write_headers(200)
    # Same idea for reads: aiofiles / to_thread if blocking disk matters.
    with path.open("rb") as f:
        while True:
            chunk = f.read(65_536)
            if not chunk:
                break
            w.write(chunk)
    w.end()
```

Start with either a full read or a stream for a given request—do not call body() after you have begun iterating stream(), or the other way around (Request).
The examples use a fixed filename and path for clarity—they are not safe for concurrent users or production routing: real code uses opaque ids, one upload root, auth, and collision-proof destinations.
Validators such as ETag, Last-Modified, conditional 304, and Cache-Control are ordinary HTTP—set headers in your handlers as needed; use responses.empty when the response body must be empty.
Multipart forms and Datastar file signals
Wire format: classic HTML forms and many APIs use multipart/form-data or a raw body (the sections above); Stario gives you the bytes either way. Datastar-driven UIs often send files as signals instead—see Reading and writing signals — file uploads.
If the client sends multipart/form-data, Stario exposes the raw request—you parse parts with a library you choose (for example python-multipart) and enforce limits yourself (Request). The whole request still counts toward Stario’s body size cap and proxy limits; for large files prefer a parser that can stream parts instead of buffering the entire body in memory.
Never trust client-supplied paths or filenames for server-side opens. Map opaque ids to paths under a single vetted root; reject .. and unexpected roots, cap sizes, and restrict MIME types or extensions to what your product allows—before you write to disk or return a file to another user.
Related
Structuring larger applications — bootstrap, mounts, and where shared paths live.

Mapping errors to HTTP responses — HttpException, on_error, predictable 4xx/5xx bodies when your code maps failures to HTTP.

Reading and writing signals — Datastar file signals and multipart notes.

Testing with TestClient — posting multipart/form-data in tests.