The go-to architecture

This page walks through the architecture Stario pairs with Datastar when you build live, hypermedia-heavy UIs: CQRS-shaped HTTP, a long-lived SSE subscription, actions that often return quickly while work finishes in the background, a small Relay for in-process fan-out, and signals as a thin client-side mirror rather than a second source of truth. Not every Stario app needs Datastar; for values and escape hatches, see The Stario way. Read this page top to bottom once; later sections build on earlier ones.

Datastar is the browser runtime: declarative data-* attributes, signals, fetches, and applying HTML patches from the server. The stario.datastar package is the Python side—helpers for attributes, read_signals, and SSE framing (Datastar). It is not a second server; it pairs HTML you render in Python with Datastar in the tab.

What CQRS is (and why it is central here)

CQRS (Command Query Responsibility Segregation) names a simple split: commands change system state; queries read state and answer “what should we show now?” The two are intentionally different operations—different validation, different scaling, different failure modes.

That separation matters for Stario + Datastar because writing (handling a user intent, mutating persistence, validating business rules) is not the same job as showing the current state (building the HTML or patches the user should see after those rules run). Your handlers can keep that distinction clear: mutate in one path, re-query authoritative state when you need to render.
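The split is easy to see without any framework. A minimal sketch, assuming nothing about Stario's API (the `TodoStore` class and handler names below are purely illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class TodoStore:
    """Authoritative state. Commands mutate it; queries read it."""
    items: list[str] = field(default_factory=list)

def add_todo(store: TodoStore, text: str) -> None:
    # Command: validate intent and mutate state; returns nothing to render.
    text = text.strip()
    if not text:
        raise ValueError("empty todo")
    store.items.append(text)

def render_todos(store: TodoStore) -> str:
    # Query: read current state and answer "what should we show now?"
    lis = "".join(f"<li>{item}</li>" for item in store.items)
    return f"<ul id='todos'>{lis}</ul>"

store = TodoStore()
add_todo(store, "write docs")   # command lane: different validation, failure modes
html = render_todos(store)      # query lane: pure read, safe to repeat
```

Note that `render_todos` never appears inside `add_todo`: the command does not decide what the user sees next.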

CQRS with Datastar: the loop

In a typical live UI you end up with a circle:

  1. Initial document — GET /page returns HTML for the first paint (signals, links, static structure).

  2. Subscription — the page immediately opens GET /subscribe (or similar) as Server-Sent Events. That connection’s job is to push updates into the document for as long as the tab stays open.

  3. Actions — user intent goes out as POST, PATCH, DELETE (or GET when that matches your cache rules). In many apps you want the handler to return quickly—often 204 No Content—and run the real work in the background (app.create_task, a queue worker, etc.).

  4. Notify — when work finishes (or when shared state changes), something must tell subscribers there is new truth to show. In Stario, Relay.publish is the usual in-process hook (Relay).

  5. Re-render path — the code that handles /subscribe (or the loop it triggers) re-reads state from your database, cache, or services, re-renders HTML, and sends patches over SSE.

So the high-level flow is:

state → action → relay → subscribe → new state
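That whole circle fits in a few lines of plain asyncio. The queue standing in for Relay, the handler names, and the single subscriber are all illustrative, not Stario's API:

```python
import asyncio

STATE = {"count": 0}  # authoritative state (stand-in for your DB/cache)

def render(state: dict) -> str:
    # Re-render path: ui is a function of authoritative state.
    return f"<span id='count'>{state['count']}</span>"

async def main() -> list[str]:
    relay: asyncio.Queue[str] = asyncio.Queue()  # in-process notification bus
    patches: list[str] = []

    async def subscribe() -> None:
        # Query lane: each notification is a reason to re-read and re-render.
        await relay.get()
        patches.append(render(STATE))

    async def action_increment() -> None:
        # Command lane: mutate state, then notify (think: return 204 fast,
        # real work in app.create_task or a queue worker).
        STATE["count"] += 1
        await relay.put("count-changed")

    sub = asyncio.create_task(subscribe())
    await action_increment()
    await sub
    return patches

patches = asyncio.run(main())
```

The action never builds HTML; the subscriber never mutates state. Those are the two lanes described next.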

Two parallel lanes, not a strict pipeline:

  • Query lane — GET /subscribe (or similar) stays open; each notification is a reason to re-read authoritative state and render or patch. Ordering of notifications can differ from the ordering of background work.

  • Command lane — HTTP actions and app.create_task / queue work mutate state; Relay.publish (or similar) nudges subscribers that truth may have changed.

The subscribe side is still a query: it answers “what should the UI look like now?” The action side is the command lane. Keeping that split explicit avoids mixing “I processed a click” with “here is the full new page” in one ad hoc response—unless you want that for a specific route.

This doc uses CQRS-shaped language for that command/read split over HTTP—it is not claiming event-sourced write models or separate read databases unless you add them.

The /subscribe endpoint: SSE, efficiency, and patches

/subscribe is usually a long-lived SSE response: one request, many events, each carrying a fragment of work for the client—often HTML to merge into the DOM via Datastar’s morph pipeline (patch_elements, Writer). Use w.alive(...) so the handler stays tied to the connection lifecycle (disconnect, shutdown) for the whole stream.

Why SSE instead of WebSockets (for this shape)? For server-originated push of text events, SSE is simple: HTTP semantics, works through many proxies, reconnect story is standardized. WebSockets are heavier when you do not need a full duplex binary pipe; Stario’s docs and examples lean on SSE plus HTTP actions first.

Optional transport detail: on HTTP/2 or HTTP/3, multiplexing means many requests and streams share one connection; header compression (HPACK, QPACK) and reuse of that connection make additional short requests—actions alongside /subscribe—cheap and straightforward. You still model actions as normal HTTP; you do not need a second transport just to avoid “too many round trips” when the browser already has an efficient connection to the origin.

The SSE format defines event IDs and Last-Event-ID for resumability after reconnect—that is part of the spec and useful when you truly need to replay or gap-fill events. In most practical apps, after a drop you reconnect and /subscribe simply sends the latest rendered state anyway; you are not obliged to implement a full event log on the client. Treat Last-Event-ID as available when you need it, not as a requirement for every UI.

Each event is often an HTML patch (or a small script or signal patch). The client does not reload the whole page; Datastar applies the payload to the live DOM.

Large HTML patches (“fat morphs”), compression, and morphing

“Fat morph” is a strategy: you send large HTML fragments (up to a full document) and rely on Datastar’s morph step to merge them into the live tree.

From a practical workflow, the recommendation is simpler: ship what you need—often a full re-render of the page or shell on each meaningful update—and only optimize (smaller patches, signal-only updates, finer selectors) when you have observed a real bottleneck. That matches avoiding premature optimization: measure first, then narrow payloads or branch logic.

When you do send large fragments, two mechanisms work in your favor:

  • Compression — Stario negotiates Content-Encoding per response on each Writer (Writer.respond, Server). On a long-lived SSE stream, the compressor can reuse context across chunks on that response body: the longer the stream stays up, the more repeated structure the codec often sees, which can improve ratio for bursty, similar HTML (behavior also depends on proxies and intermediaries).

  • Morphing — Datastar’s merge does not throw away the whole DOM on each patch. It updates what changed and leaves untouched subtrees alone, so local state on stable nodes (focus, third-party widgets, scroll in unpatched regions) survives more often than with a full innerHTML replace.
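The compression point is observable with a plain zlib stream: one compressor object across many chunks (as on one long-lived SSE response body) beats compressing each chunk independently, because repeated HTML structure stays in the shared window. A sketch under that assumption, not Stario's actual content-encoding negotiation:

```python
import zlib

# 200 bursts of structurally similar HTML, as a live UI tends to produce.
chunks = [
    f"<div id='row-{n}' class='item card'>value {n}</div>".encode()
    for n in range(200)
]

# One compressor for the whole stream: context carries across chunks.
streamed = zlib.compressobj()
stream_bytes = sum(len(streamed.compress(c)) for c in chunks)
stream_bytes += len(streamed.flush())

# A fresh compressor per chunk: no shared context, headers repeated.
per_chunk_bytes = sum(len(zlib.compress(c)) for c in chunks)

assert stream_bytes < per_chunk_bytes  # shared context wins on repetitive HTML
```

The exact ratio depends on payloads, codec, and any proxies in between, but the direction of the effect is what makes fat morphs cheaper than they look.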

ui = f(state)

A useful rule: what the user sees should be a function of authoritative state you already keep—database, memory, downstream services—not of whatever happened to be on the event bus last. After an action completes, re-query that truth and render. Pushing raw domain events into the SSE payload is possible, but treating the event log as the UI driver is a distinct design choice; many teams prefer events for notifications, queries for pixels.

Actions: how the browser talks back

The usual starting point is declarative actions: data-on:* (via ds.on, ds.post, …) and signals that carry the last client-side form or field state when something interesting happens (Reading and writing Datastar signals).

From there you can optimize:

  • Event bubbling — listeners on a parent can inspect event.target so many children share one handler (fewer attributes, one place to branch). On the originating element you can add data-* attributes (for example data-row-id, data-index) and read them in the delegated handler via event.target.dataset—a small, explicit channel for ids, indexes, or flags without stuffing everything into signals.

  • Query parameters — there is no rule that every parameter must live in signals. Putting action or id in the URL is fine and often clearer for logs and links.

  • One handler vs many — a single POST /action that dispatches on a field is often easier to evolve than dozens of tiny routes. The opposite is also valid; it is a preference and team-style choice.
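The bubbling-plus-dataset pattern in HTML terms: one listener on the parent, per-row facts in data-* attributes. A sketch rendered from Python with plain strings (in real code you would use the ds helpers; the `@post`/`evt` expression syntax is an assumption to check against Datastar's docs):

```python
def render_rows(rows: list[dict]) -> str:
    """One delegated click handler on the parent <ul>; each <li> carries
    its id in data-row-id, readable as evt.target.dataset.rowId."""
    body = "".join(
        f"<li data-row-id='{r['id']}' data-index='{i}'>{r['name']}</li>"
        for i, r in enumerate(rows)
    )
    return (
        "<ul data-on:click=\"@post('/row/' + evt.target.dataset.rowId)\">"
        + body
        + "</ul>"
    )

html = render_rows([{"id": 7, "name": "Ada"}, {"id": 9, "name": "Lin"}])
```

Two rows, one handler, zero signals: the id travels on the element that was clicked.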

Relay: the notification bus

Relay connects publishers (commands finished, timers, other workers) to subscribers (SSE loops waiting to push new HTML). The command path often does not need to embed the new UI; it publishes “something changed” (topic + optional payload). The subscription path re-reads state and renders.

In most apps, Relay carries notifications only—not the full payload the UI needs. That keeps the bus small and cheap. You can still put data on events (for example: “command failed”) and let /subscribe send a toast or a tiny patch instead of a full morph.

stario.relay.Relay is in-process and single-process (Toolbox). For multiple workers or hosts, use a broker (NATS, Redis, …) and keep the same shape: publish after commit, subscribe in the SSE loop, re-query before patch.

Datastar signals: not the same as server state

Signals are JSON-shaped values mirrored to the server on actions (read_signals). They are not a substitute for authoritative server state. Prefer few signals: URLs, hidden fields, and server-rendered error regions often cover more than you think.

Practical roles for signals: form fields, scroll position on long pages, ephemeral input the server must see on the next action. When validation fails, the server can re-render the same page with errors in the HTML—you do not need a parallel “error signal” for every field.
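That server-rendered-errors pattern in miniature: in real Stario, read_signals gives you the JSON-shaped signal values; here a plain dict stands in, and the data-bind attribute syntax is an assumption to check against Datastar's docs.

```python
def handle_submit(signals: dict) -> str:
    """Validate mirrored form signals; on failure, re-render the same
    form with errors in the HTML, not in per-field error signals."""
    email = str(signals.get("email", "")).strip()
    errors = [] if "@" in email else ["Enter a valid email address."]
    if errors:
        msgs = "".join(f"<p class='error'>{e}</p>" for e in errors)
        return (
            f"<form id='signup'>{msgs}"
            f"<input data-bind:email value='{email}'></form>"
        )
    return "<p id='signup'>Thanks, check your inbox.</p>"

ok = handle_submit({"email": "ada@example.test"})
bad = handle_submit({"email": "nope"})
```

Both branches patch the same `id='signup'` region, so the morph step replaces the form with either errors or the success message.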

A game analogy

Think of signals as the player transform—position and rotation you send up. The server is the world: it decides what is visible, whether a move is legal, what happens when you walk into a wall or fall off a cliff. Actions are inputs; the server remains authoritative on what the player may see and how state transitions happen. That is the same separation as commands vs queries: intent in, truth out.

For production, align the Datastar client version with how you ship HTML and static assets—the note below is about the Python helpers and bundled assets.