CQRS + Event Sourcing
A practical walkthrough you can follow step-by-step. We take one example — placing an order — and carry it through every CQRS layer using Ironflow’s primitives.
If you want the conceptual mapping first, read CQRS with Projections. This tutorial shows the code.
The Scenario
A customer places an order. We want to:
- Validate the command and produce events
- Persist events atomically with optimistic concurrency
- Build two projections: OrderDetailView (customer-facing order page) and CustomerOrdersList (account dashboard)
- Serve queries from those projections
How Ironflow Shapes This
Ironflow ships the pieces you’d otherwise build yourself. A few differences from a generic CQRS stack are worth calling out up front:
- No aggregate class. Ironflow has no Aggregate primitive. The consistency boundary lives in an entity stream, and the “aggregate method” is a pattern you write inside a function or handler: read the stream, fold state, validate, append.
- No event-store table to manage. streams.append is the event store. expectedVersion is optimistic concurrency. The stream is the source of truth.
- No outbox pattern to build. streams.append writes the event and an outbox row in one transaction; Ironflow’s built-in outbox worker drains to NATS JetStream asynchronously. Projections consume from there with automatic checkpoints.
- No checkpoint table to own. Managed projections checkpoint automatically. External projections just implement an idempotent handler.
- Rebuild is one API call: POST /api/v1/projections/{name}/rebuild.
What’s still on you: generating causation/correlation IDs and stamping them onto your events via the metadata option (Steps 3 and 5). Command-level dedup is handled by ironflow.commandDedup() (Step 3). Read-your-writes is handled differently per SDK — the Node client exposes projections.waitForEvent(eventId) and projections.waitForCatchup({ minSeq }); the browser client uses subscribeToProjection for live updates (Step 10).
Project Layout
Below are recommended layouts that match each stack’s conventions. The walkthrough’s snippets map directly onto these paths, and each step notes which files it lives in.
Feature-sliced — one folder per bounded context, with thin top-level composition files:
```
src/
├── orders/                         # "orders" bounded context
│   ├── commands/
│   │   └── place-order.ts          # PlaceOrderCommand + placeOrderHandler (Steps 1, 3)
│   ├── domain/
│   │   ├── aggregate.ts            # foldOrder + placeOrder + OrderState (Step 4)
│   │   └── events.ts               # OrderPlacedEvent, OrderShippedEvent, … (Step 5)
│   ├── projections/
│   │   ├── order-detail-view.ts    # managed projection (Step 8)
│   │   └── customer-orders-list.ts # partitioned projection (Step 8)
│   ├── read/
│   │   └── queries.ts              # wraps ironflow.getProjection for the UI (Step 9)
│   └── infra/
│       ├── customer-repo.ts        # enrichment (Step 3)
│       ├── product-catalog.ts
│       └── command-dedup.ts        # KV-backed idempotency (Step 3)
├── api/
│   └── orders.ts                   # HTTP POST handler, returns 202 (Step 2)
├── worker.ts                       # createWorker(...) — projections + functions (Step 8)
└── ironflow.ts                     # createClient() shared by api + worker
```
Conventions:
- kebab-case.ts for filenames, PascalCase for types, camelCase for functions.
- One command per file under commands/; the handler lives beside its command type.
- Each projection is its own small module exporting a createProjection(...) call.
- worker.ts is the composition root for the read side; api/orders.ts is the write-side entry.
- For Next.js, app/api/orders/route.ts forwards to src/api/orders.ts, and app/orders/[orderId]/page.tsx consumes src/orders/read/queries.ts.
Standard Go layout — one package per bounded context under internal/, binaries under cmd/:
```
cmd/
├── orderserver/
│   └── main.go            # HTTP server mounts api handlers
└── orderworker/
    └── main.go            # worker entry: registers projections + functions (Step 8)
internal/
└── orders/
    ├── place_order.go     # PlaceOrderCommand struct + PlaceOrderHandler (Steps 1, 3)
    ├── aggregate.go       # FoldOrder + PlaceOrder + OrderState (Step 4)
    ├── events.go          # OrderPlaced, OrderShipped, … payload structs (Step 5)
    ├── projections.go     # order-detail-view, customer-orders-list (Step 8)
    ├── queries.go         # GetOrderDetail / ListCustomerOrders wrappers (Step 9)
    ├── repo.go            # customer + product enrichment (Step 3)
    └── dedup.go           # KV-backed idempotency (Step 3)
api/
└── orders.go              # net/http handler, returns 202 (Step 2)
```
Conventions:
- Package name = short lowercase bounded-context name (orders) — no underscores or plural/singular mixing.
- snake_case.go filenames; exported CamelCase symbols.
- Tests sit next to source as *_test.go.
- internal/orders is consumed by api/ and cmd/; anything external imports from there, never from cmd/.
- Each cmd/<binary>/main.go is a thin composition root — wire dependencies, start the server or worker, return.
These layouts scale from a small service to dozens of bounded contexts without shuffling files. Start small — the cqrs-order example in the repo uses a flat lib/ for readability; graduate to one of the above once you have more than one context.
Step 1 — Define the Command
Commands are intent. Imperative, named in present tense, carrying IDs and enough context to attempt the action — not enriched data.
```typescript
type PlaceOrderCommand = {
  // Domain payload
  data: {
    orderId: string;
    customerId: string;
    items: { productId: string; qty: number }[];
    shippingAddress: { street: string; city: string; zip: string; country: string };
  };
  // Plumbing — not a domain concern, propagates onto events
  metadata: {
    commandId: string;       // idempotency key
    issuedAt: string;
    issuedBy: string;        // auth context
    correlationId?: string;  // originating request/workflow
    tenantId?: string;
    traceId?: string;
  };
};
```
Keep the metadata split deliberate. data is what the domain needs to decide; metadata is cross-cutting plumbing that rides along on the resulting events. Generate commandId at the client or API boundary for dedup.
Step 2 — API Endpoint Receives the Command
Expose the command via a thin HTTP handler. In Ironflow you have two patterns:
- Next.js / Express / Lambda route that authenticates the user, constructs the command, and calls your handler directly.
- Ironflow function with an HTTP trigger if you want durable retries on the dispatch itself.
Either way, the endpoint is thin: authenticate, deserialize, dispatch, return 202 Accepted.
```typescript
// app/api/orders/route.ts (Next.js)
export async function POST(req: Request) {
  const user = await authenticate(req);
  const body = await req.json();

  const command: PlaceOrderCommand = {
    data: {
      orderId: body.orderId,
      customerId: user.id,
      items: body.items,
      shippingAddress: body.shippingAddress,
    },
    metadata: {
      commandId: crypto.randomUUID(),
      issuedAt: new Date().toISOString(),
      issuedBy: user.id,
      correlationId: req.headers.get("x-request-id") ?? undefined,
      tenantId: user.orgId,
      traceId: req.headers.get("traceparent") ?? undefined,
    },
  };

  await placeOrderHandler(command);
  return Response.json(
    { orderId: command.data.orderId, commandId: command.metadata.commandId },
    { status: 202 }
  );
}
```
Return 202 Accepted because read models are eventually consistent. The write succeeded; the projection might not be updated yet. See Step 10.
Step 3 — Command Handler
The handler orchestrates: check idempotency, fetch enrichment data, load the stream, decide, append. Business rules live in the domain function called from the handler — not the handler itself.
Helper identifiers in this walkthrough
Real Ironflow SDK exports used below:
- From @ironflow/node: createClient, createProjection, createWorker, CommandDedup (type). Methods on the client instance: streams.append, streams.read, emit, commandDedup(), projections.waitForEvent, projections.waitForCatchup.
- From @ironflow/browser: ironflow (singleton), with ironflow.configure, ironflow.getProjection, ironflow.subscribeToProjection.
Everything else — customerRepo, productCatalog, authenticate, elasticsearch, foldOrder, placeOrder, placeOrderHandler, the PlaceOrderCommand / OrderState / OrderDetail / OrderSummary types — is illustrative user code you would write in your own app. They stand in for your repositories, auth middleware, domain types, and domain logic so the CQRS shape stays visible. Replace with your real implementations. commandDedup in Step 3 is a real SDK export from @ironflow/node.
```typescript
import { createClient } from "@ironflow/node";

// `ironflow` is a client instance. Reuse across requests in your app.
const ironflow = createClient({ apiKey: process.env.IRONFLOW_API_KEY! });

// Create once at module level — reuse across all requests.
// Bucket creation is lazy — happens on the first operation.
const dedup = ironflow.commandDedup<{ orderId: string }>("order-commands");

async function placeOrderHandler(cmd: PlaceOrderCommand) {
  const { data, metadata } = cmd;

  // 1. Idempotency — claim before any work.
  //    Returns null if this caller wins the race (proceed).
  //    Returns the prior entry if another caller already claimed this commandId.
  const prior = await dedup.tryClaim(metadata.commandId, { orderId: data.orderId });
  if (prior !== null) return prior; // duplicate — return cached result

  try {
    // 2. Enrichment — fetch "current" values so we can freeze them into the event
    const customer = await customerRepo.getById(data.customerId);
    const products = await productCatalog.getMany(data.items.map(i => i.productId));

    // 3. Load current stream state. `read` returns { events, totalCount }.
    const { events } = await ironflow.streams.read(data.orderId);
    const state = foldOrder(events);

    // 4. Decide — validate invariants, produce new events (domain-only)
    const newEvents = placeOrder(state, data, customer, products, metadata.issuedAt);

    // 5. Append atomically with optimistic concurrency. The `metadata` option
    //    keeps cross-cutting plumbing out of domain payloads — downstream
    //    handlers and projections read it via `event.metadata`.
    const eventMetadata = {
      causationId: metadata.commandId,
      correlationId: metadata.correlationId ?? metadata.commandId,
      tenantId: metadata.tenantId,
      traceId: metadata.traceId,
      issuedBy: metadata.issuedBy,
    };
    for (const ev of newEvents) {
      // Consume the server-assigned entityVersion — never increment locally.
      // The server is the source of truth for stream version under concurrency.
      const { entityVersion } = await ironflow.streams.append(data.orderId, {
        entityType: "order",
        name: ev.name,
        data: ev.data,
      }, {
        expectedVersion: state.version,
        metadata: eventMetadata,
      });
      state.version = entityVersion;
    }

    // 6. Finalize — subsequent callers with the same commandId receive this result.
    await dedup.finalize(metadata.commandId, { orderId: data.orderId });
  } catch (err) {
    // Release the claim so honest retries can proceed after a handler failure.
    await dedup.release(metadata.commandId).catch(() => {});
    throw err;
  }
}
```
Command idempotency via the SDK
ironflow.commandDedup<T>(bucketName) returns a CommandDedup<T> backed by NATS KV. The KV bucket is created lazily on the first operation. tryClaim(commandId, claim) atomically reserves the commandId before any handler work — the first caller wins and proceeds; concurrent or retried callers receive the prior entry without re-running the handler. Call finalize(commandId, result) after success to stamp the final result, and release(commandId) in the catch block so honest retries can proceed after a failure. Never call release after finalize succeeds — it would delete the finalized result and allow replay.
Concurrency retry omitted for clarity
The snippet elides the optimistic-concurrency retry loop. In production, wrap steps 3–5 in a retry that catches the HTTP 409 from streams.append, reloads the stream, re-runs foldOrder + placeOrder, and re-appends. Step 6 below describes the mechanics. The runnable cqrs-order example relies on the place-order function’s step-memoized retries, so a 409 becomes a durable retry at the function boundary instead of a manual loop.
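The sketch below illustrates one way to write that loop, reusing the illustrative foldOrder / placeOrder helpers and the ironflow client from Steps 3–4. Treat the error check (err?.status === 409) and the attempt budget as assumptions: how a version conflict surfaces depends on your HTTP/error layer, so adapt the condition to what your client actually throws.

```typescript
// Sketch only: `err?.status === 409` and maxAttempts are assumptions, not SDK guarantees.
async function decideAndAppendWithRetry(
  cmd: PlaceOrderCommand,
  customer: Customer,
  products: Product[],
  maxAttempts = 3,
) {
  const { data, metadata } = cmd;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    // Reload and re-decide on every attempt: stale state is exactly what the conflict means.
    const { events } = await ironflow.streams.read(data.orderId);
    const state = foldOrder(events);
    const newEvents = placeOrder(state, data, customer, products, metadata.issuedAt);

    try {
      for (const ev of newEvents) {
        const { entityVersion } = await ironflow.streams.append(data.orderId, {
          entityType: "order",
          name: ev.name,
          data: ev.data,
        }, {
          expectedVersion: state.version,
          metadata: {
            causationId: metadata.commandId,
            correlationId: metadata.correlationId ?? metadata.commandId,
          },
        });
        state.version = entityVersion;
      }
      return; // success
    } catch (err: any) {
      // Another writer advanced the stream first: loop back, reload, re-decide.
      if (err?.status === 409 && attempt < maxAttempts) continue;
      throw err;
    }
  }
}
```

Keeping each decision to a single event (as most CQRS commands do) keeps this loop simple; re-deciding against the reloaded stream naturally rejects commands whose invariants no longer hold.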
Step 4 — The “Aggregate” (a Pattern, Not a Class)
A pure function that folds events into state, and a pure function that decides new events given current state + command. No framework class required.
```typescript
type OrderState = {
  id: string;
  status: "placed" | "shipped" | "cancelled" | null;
  items: LineItem[];
  customerId: string | null;
  totalAmount: number;
  version: number;
};

function foldOrder(events: StreamEvent[]): OrderState {
  const initial: OrderState = { id: "", status: null, items: [], customerId: null, totalAmount: 0, version: 0 };
  return events.reduce((state, ev) => {
    switch (ev.name) {
      case "order.placed":
        return { ...state, id: ev.data.orderId, status: "placed", items: ev.data.items, customerId: ev.data.customer.id, totalAmount: ev.data.totalAmount, version: ev.entityVersion };
      case "order.shipped":
        return { ...state, status: "shipped", version: ev.entityVersion };
      case "order.cancelled":
        return { ...state, status: "cancelled", version: ev.entityVersion };
      default:
        return { ...state, version: ev.entityVersion };
    }
  }, initial);
}

function placeOrder(
  state: OrderState,
  data: PlaceOrderCommand["data"],
  customer: Customer,
  products: Product[],
  occurredAt: string,
) {
  if (state.status !== null) throw new Error("Order already exists");
  if (customer.isBlocked) throw new Error("Customer cannot place orders");
  if (products.some(p => !p.isAvailable)) throw new Error("Product unavailable");

  const lineItems = products.map(p => ({
    productId: p.id,
    name: p.name,          // embedded snapshot — frozen forever
    price: p.currentPrice, // embedded snapshot
    qty: data.items.find(i => i.productId === p.id)!.qty,
  }));
  const total = lineItems.reduce((sum, li) => sum + li.price * li.qty, 0);

  // Domain-only payload. Plumbing (causation/correlation/tenant/trace)
  // is added by the handler when it appends — see Step 3 and Step 5.
  return [{
    name: "order.placed",
    data: {
      orderId: data.orderId,
      customer: { id: customer.id, name: customer.name, email: customer.email },
      items: lineItems,
      shippingAddress: data.shippingAddress,
      totalAmount: total,
      occurredAt,
    },
  }];
}
```
The event payload embeds customer.name, price, name, etc. Those values are frozen in the event forever — this is what makes rebuilds work.
Step 5 — Event Shape and Metadata
Every event has three parts: payload (data, domain facts), user metadata (metadata, cross-cutting plumbing like tenant/trace IDs), and platform metadata (stamped by Ironflow).
```
{
  // Platform-assigned (returned from read/subscribe)
  id: "evt_xyz789",                  // eventId
  name: "order.placed",
  entityVersion: 1,                  // position within this entity's stream
  version: 1,                        // schema version of this event type
  timestamp: "2026-04-17T14:22:01Z",
  source: "api",                     // api | cron | webhook

  // You control — domain payload
  data: { orderId, customer, items, shippingAddress, totalAmount },

  // You control — cross-cutting metadata
  metadata: {
    causationId: "cmd_a1b2c3",
    correlationId: "req_789",
    tenantId: "org_123",
    traceId: "otel_abc",
    issuedBy: "user_456",
  }
}
```
SDK support today
- ironflow.emit(name, data, { metadata }) — first-class. Metadata is delivered to push-mode function handlers as event.metadata.

```typescript
await ironflow.emit("order.placed", orderData, {
  metadata: { causationId: metadata.commandId, correlationId: metadata.commandId, tenantId: metadata.tenantId },
});
```

- ironflow.streams.append(entityId, { entityType, name, data }, { expectedVersion, metadata }) — first-class. Metadata rides alongside the event (not inside data) and is persisted to the store:

```typescript
await ironflow.streams.append(data.orderId, {
  entityType: "order",
  name: "order.placed",
  data: domainPayload,
}, {
  expectedVersion: state.version,
  metadata: { causationId: metadata.commandId, correlationId: metadata.commandId, tenantId: metadata.tenantId },
});
```

- streams.read returns metadata on each event, and downstream handlers and projections read it via event.metadata.
- Pull-mode handlers and projections — event.metadata (Go: event.Metadata) is delivered to push-mode functions, pull-mode workers, and projection handlers alike. Causation/correlation/tenant IDs you stamp on emit() or streams.append() are available everywhere.
Why keep metadata separate
Metadata is for plumbing, not domain semantics. Projections usually ignore it; logs, audits, and tracing consume it. Keeping it out of data stops cross-cutting concerns from polluting your domain events — and lets the SDK thread it uniformly across push/pull/projection handlers.
For schema evolution (adding fields, restructuring), use upcasters — see the Event Versioning guide.
Step 6 — Persisting Events
streams.append is the event store. One call per event, atomic, with optimistic concurrency.
```typescript
const { entityVersion, eventId, sequence } = await ironflow.streams.append(
  "order-123",
  { entityType: "order", name: "order.placed", data: { /* ... */ } },
  { expectedVersion: 0 } // -1 (default) = no guard, 0 = create-only, N = exact prior version
);
```
AppendOptions.expectedVersion defaults to -1 (no guard). Pass 0 to require an empty stream (create-only), or pass the version observed when you read the stream to guard against concurrent writers.
Behavior:
- Create-only: expectedVersion: 0 fails if the stream already has events → HTTP 409.
- Optimistic concurrency: expectedVersion: N fails if another writer advanced the stream first. Catch the 409 and retry (reload + re-decide + re-append).
- Atomicity: the append is atomic per event. Keep the per-command event count small — most CQRS decisions emit one event anyway.
- sequence: always returned as 0 today because the outbox publishes asynchronously. Use eventId with projections.waitForEvent() for read-your-writes (Step 10).
That’s the whole event store. No global_position column to maintain, no unique constraint to add, no migration.
Step 7 — Publishing: No Outbox Needed
You’d normally write an outbox table in the same transaction as the event, then run a polling publisher to ship rows to Kafka/NATS. Ironflow does this for you.
When streams.append returns success, the event is committed to the store and an outbox row is written in the same transaction. A per-node outbox worker drains the outbox asynchronously to NATS JetStream (stream EVENTS, subject ironflow.{project}.{env}.events.{name}), where projections consume with durable NATS consumers. Checkpoints are tracked automatically. Because publish is async, streams.append returns sequence: 0 — use projections.waitForEvent(eventId, ...) when you need read-your-writes (see Step 10 and Implementation Order → Step 4).
You write zero publisher code. Skip this step.
Step 8 — Projectors Build Read Models
Define each projection with createProjection. Ironflow subscribes, feeds events, and stores the state you return.
OrderDetailView — managed projection
```typescript
import { createProjection, createWorker } from "@ironflow/node";

// The handler receives (state, event, ctx). The typed event surface is
// { name, data }; timestamps and ids come off ctx.event (a Date for ctx.event.timestamp).
const orderDetail = createProjection({
  name: "order-detail-view",
  events: ["order.placed", "order.shipped", "order.cancelled"],
  initialState: () => ({ orders: {} as Record<string, OrderDetail> }),
  handler: (state, event, ctx) => {
    const data = event.data as any;
    const ts = ctx.event.timestamp.toISOString();
    const orderId = data.orderId;
    const existing = state.orders[orderId] ?? {};

    switch (event.name) {
      case "order.placed":
        state.orders[orderId] = {
          orderId,
          customerId: data.customer.id,
          customerName: data.customer.name,
          customerEmail: data.customer.email,
          items: data.items,
          shippingAddress: data.shippingAddress,
          totalAmount: data.totalAmount,
          status: "placed",
          placedAt: ts,
        };
        break;
      case "order.shipped":
        state.orders[orderId] = { ...existing, status: "shipped", trackingNumber: data.trackingNumber, shippedAt: ts };
        break;
      case "order.cancelled":
        state.orders[orderId] = { ...existing, status: "cancelled", cancelledAt: ts, cancellationReason: data.reason };
        break;
    }
    return state;
  },
});
```
CustomerOrdersList — partitioned managed projection
Partition by customerId so each customer’s dashboard is a separate state blob.
```typescript
const customerOrders = createProjection({
  name: "customer-orders-list",
  events: ["order.placed", "order.shipped", "order.cancelled"],
  partitionKey: "$.data.customer.id",
  initialState: () => ({ orders: [] as OrderSummary[] }),
  handler: (state, event, ctx) => {
    const data = event.data as any;
    if (event.name === "order.placed") {
      state.orders.unshift({
        orderId: data.orderId,
        placedAt: ctx.event.timestamp.toISOString(),
        totalAmount: data.totalAmount,
        status: "placed",
        summary: `${data.items.length} items`,
      });
    } else {
      const row = state.orders.find(o => o.orderId === data.orderId);
      if (row) row.status = event.name.split(".")[1];
    }
    return state;
  },
});
```
Start the worker

```typescript
const worker = createWorker({
  functions: [],
  projections: [orderDetail, customerOrders],
});
await worker.start();
```
Workers need at least one function today
Ironflow’s server currently rejects worker registration with an empty functions array. A projection-only worker won’t start. The common workaround — shown in the cqrs-order example — is to host the command handler as an Ironflow function (triggered by an emitted create.order event) and have the HTTP route emit instead of calling the handler synchronously. You get the same CQRS shape plus durable retries and step memoization for free.
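A hedged sketch of the route-side half of that workaround follows: the route emits create.order carrying the command, and the worker-hosted function (not shown; its registration API is outside this walkthrough) invokes placeOrderHandler with the event payload. buildPlaceOrderCommand is a hypothetical helper standing in for the command construction already shown in Step 2.

```typescript
// app/api/orders/route.ts — workaround variant (sketch).
// The command is handed to the worker via an emitted event instead of being
// handled in-process; the worker hosts placeOrderHandler as an Ironflow function.
export async function POST(req: Request) {
  const user = await authenticate(req);
  const command: PlaceOrderCommand = buildPlaceOrderCommand(user, await req.json()); // hypothetical helper (Step 2)

  // The whole command travels as event data; plumbing rides in metadata as in Step 5.
  await ironflow.emit("create.order", command, {
    metadata: {
      causationId: command.metadata.commandId,
      correlationId: command.metadata.correlationId ?? command.metadata.commandId,
      tenantId: command.metadata.tenantId,
    },
  });

  return Response.json(
    { orderId: command.data.orderId, commandId: command.metadata.commandId },
    { status: 202 }
  );
}
```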
Same events, two independent read models, independent checkpoints. Add, remove, or rebuild projections without touching the write side.
For side-effect sync (Elasticsearch, Algolia, Redis) use an external projection — omit initialState and make the handler async (event, ctx) => Promise<void>. Use ctx.event.id as the idempotency key so replays are safe.
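A sketch of that shape, assuming the illustrative elasticsearch client from the helper-identifiers note (index and field names here are invented for the example):

```typescript
// External projection sketch: no initialState, async (event, ctx) handler.
const orderSearchSync = createProjection({
  name: "order-search-sync",
  events: ["order.placed", "order.cancelled"],
  handler: async (event, ctx) => {
    const data = event.data as any;

    if (event.name === "order.placed") {
      // Deterministic document id (the order id) plus the applied event id keep
      // replays idempotent: re-running this handler rewrites the same document.
      await elasticsearch.index({
        index: "orders",
        id: data.orderId,
        document: {
          orderId: data.orderId,
          customerName: data.customer.name,
          totalAmount: data.totalAmount,
          status: "placed",
          lastAppliedEventId: ctx.event.id, // idempotency/audit anchor
        },
      });
    } else {
      await elasticsearch.update({
        index: "orders",
        id: data.orderId,
        doc: { status: "cancelled", lastAppliedEventId: ctx.event.id },
      });
    }
  },
});
```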
Step 9 — Queries Hit Read Models
Queries become trivial — no aggregate loading, no event folding.
```typescript
import { ironflow } from "@ironflow/browser";

// Call once at app startup. Browser `ironflow` is a singleton.
ironflow.configure({ serverUrl: "https://api.example.com" });

const detail = await ironflow.getProjection("order-detail-view");
const order = (detail.state as any).orders["order-123"];

const dashboard = await ironflow.getProjection("customer-orders-list", {
  partition: "cust-456",
});
```

```bash
curl $IRONFLOW_URL/api/v1/projections/order-detail-view
curl "$IRONFLOW_URL/api/v1/projections/customer-orders-list?partition=cust-456"
```

Real-time updates use subscribeToProjection, which pushes new state over the server’s WebSocket as events arrive:
```typescript
const sub = await ironflow.subscribeToProjection("order-detail-view", {
  onUpdate: (state, event) => {
    // state is the new projection state; event is { id, name }
  },
});
// sub.unsubscribe() when you're done.
```
Step 10 — Handling Eventual Consistency in the UI
Because Step 2 returns 202 before projections are updated, the client might redirect to /orders/{orderId} and get a 404. Options:
- Wait for the write on the server (recommended for entity-stream events) — call ironflow.projections.waitForEvent(eventId, "order-detail-view", { partition, timeoutMs: 5000 }) using the eventId returned from streams.append, then return the 202 only after the projection has applied it. This is the canonical read-your-writes path because streams.append returns sequence: 0, so waitForCatchup({ minSeq }) cannot be used with entity-stream events. A server-side sketch of this option follows the list.
- Optimistic rendering — reconstruct the expected shape from the command, render it, then reconcile when the projection catches up. Best fit for SPAs.
- Subscribe after redirect — call ironflow.subscribeToProjection("order-detail-view", { onUpdate }) from the browser and wait for the orderId key to appear in the pushed state. Usually <200ms.
- Just accept the lag — for many domains, 200–500ms is invisible.
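A sketch of the first option at the API boundary, assuming placeOrderHandler is adapted to return the eventId of its final append (the Step 3 handler does not return it, so treat that and buildPlaceOrderCommand as assumptions for this example):

```typescript
// Sketch: server-side read-your-writes before answering the POST.
export async function POST(req: Request) {
  const user = await authenticate(req);
  const command: PlaceOrderCommand = buildPlaceOrderCommand(user, await req.json()); // hypothetical helper (Step 2)

  // Assumed adaptation: the handler surfaces the eventId from its last streams.append.
  const { eventId } = await placeOrderHandler(command);

  // Block (bounded) until order-detail-view has applied the event, so a redirect
  // to /orders/{orderId} cannot 404. On timeout, fall through; the client-side
  // options above still apply.
  await ironflow.projections
    .waitForEvent(eventId, "order-detail-view", { timeoutMs: 5000 })
    .catch(() => {});

  return Response.json(
    { orderId: command.data.orderId, commandId: command.metadata.commandId },
    { status: 202 }
  );
}
```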
Step 11 — Rebuilding Projections
The payoff of the whole architecture. When a projection schema changes or a bug is fixed, you rebuild:
```bash
# Reset and rebuild from the beginning of the event log
curl -X POST $IRONFLOW_URL/api/v1/projections/order-detail-view/rebuild
```
Ironflow clears the materialized state, resets the checkpoint, and replays all historical events through your handler. The projection status transitions rebuilding → active when it catches up; the dashboard Projections page shows lag in real time. Poll GET /api/v1/projections/{name}/rebuild or watch the ironflow_projection_rebuild_* Prometheus metrics for progress.
For a safer cutover, keep the old projection online while you rebuild a v2 alongside it:
- Deploy order-detail-view-v2 with the new handler/schema.
- Wait for the v2 lag to reach zero.
- Flip your query code from v1 to v2.
- Delete v1: DELETE /api/v1/projections/order-detail-view.
Zero downtime, no data loss, and you can diff outputs during cutover. This is why events carry embedded snapshots — v2 can rebuild itself from the event log without depending on whatever customer/product data happens to exist today.
The Cheat Sheet
Write side: API endpoint → command handler (dedup, enrich, fold, decide) → streams.append with expectedVersion. Events hit NATS automatically.
Read side: createProjection(...) × N, each with independent checkpoints → queries hit getProjection or REST, with no joins or logic.
Enrichment: events carry ID and snapshot values for decision-relevant attributes. Rebuilds stay self-contained.
Consistency boundary: one entity stream is strongly consistent via expectedVersion. Everything downstream is eventually consistent.
Rebuild: POST /api/v1/projections/{name}/rebuild. Treat it as a first-class capability.
What Ironflow Doesn’t Do (Yet)
Be honest with your team:
- Choreography sagas — Ironflow does orchestrated sagas via function compensation. Chain emits manually if you need choreography.
Implementation Order
Ship velocity matters, but a few disciplines belong in the foundation rather than bolted on later. Concurrency control, idempotency, and a testing idiom are what separate event sourcing that scales from event sourcing that becomes a regret. The ordering below front-loads those, then layers capabilities outward.
Step 0 — Model before you code
Event-storm the slice with the people who know the domain. Name the aggregate. Define the consistency boundary (what must be transactionally consistent vs. what can lag). Pick past-tense, domain-language event names (OrderPlaced, not CreateOrderEvent). Skipping this is how teams end up with OrderManager god-streams that span half the business.
References: Eric Evans (DDD), Alberto Brandolini (Event Storming), Vaughn Vernon (“Implementing Domain-Driven Design”).
Step 1 — Walking skeleton with discipline baked in
One stream, one event type, one projection, one query — end-to-end. From day one include:
- expectedVersion on every append. Optimistic concurrency is the defining invariant of an event-sourced aggregate. Without it you have an event log, not event sourcing. Even with one writer, ship the guard.
- Given-When-Then aggregate tests. Given prior events, when command, then new events. This is the ES testing idiom — your domain logic should be unit-testable without a database, a server, or a clock. A sketch of this idiom follows the list.
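A sketch of that testing idiom with Node's built-in test runner (any framework works), using the illustrative foldOrder / placeOrder from Step 4; commandData, customer, products, and priorOrderPlacedEvent are fixtures you would define:

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

test("placing an order on an empty stream emits order.placed", () => {
  // Given: no prior events
  const state = foldOrder([]);

  // When: the command is decided against that state
  const events = placeOrder(state, commandData, customer, products, "2026-04-17T14:22:01Z");

  // Then: exactly one order.placed carrying the frozen snapshot values
  assert.equal(events.length, 1);
  assert.equal(events[0].name, "order.placed");
  assert.equal(events[0].data.customer.name, customer.name);
});

test("placing an order that already exists is rejected", () => {
  // Given: the order was already placed
  const state = foldOrder([priorOrderPlacedEvent]);

  // When / Then: the decision throws instead of emitting
  assert.throws(() =>
    placeOrder(state, commandData, customer, products, "2026-04-17T14:22:01Z")
  );
});
```

No database, no server, no clock: the whole decision surface is two pure functions.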
Step 2 — Command idempotency at the API boundary
At-least-once delivery is a day-one concern. The moment you have an HTTP endpoint and a retrying client, duplicates happen. A small command_dedup KV bucket keyed by commandId is enough — first writer wins, retries return the previous result. Do not treat idempotency as an optimization; it is correctness.
Step 3 — Second projection + correlation IDs on every event
Add a second projection on the same events to prove the “one event, many views” principle. While you’re touching the write side, stamp causationId, correlationId, and tenantId on every event (Step 5 of the walkthrough). Debugging an event-sourced system without correlation chains is brutal; adding them later is harder than it sounds because you can’t backfill what wasn’t recorded.
Step 4 — Concurrency retry loop + read-your-writes UX
Wrap the load–decide–append cycle in a retry that catches the 409 from streams.append, reloads the stream, re-runs the aggregate, and re-appends. For read-your-writes, use ironflow.projections.waitForEvent(eventId, projectionName, { partition, timeoutMs: 5000 }) — pass the eventId returned by streams.append. The server resolves the event ID to its NATS sequence once the outbox worker publishes, then blocks until the projection has processed it (or the deadline fires). Under the transactional outbox (#487), streams.append does not return a usable sequence anymore; it’s always 0 because the publish happens asynchronously after the transaction commits. This is where product usually pushes back on CQRS — have an answer ready.
Step 5 — Second event type + upcasters + rebuild rehearsal
You will version events on day one whether you planned to or not. Before your second deploy, add an upcaster for one event and rehearse a full projection rebuild against prod-volume data. Teams discover at 3 a.m. that their rebuild takes 40 hours. Measure it now, while the stakes are low.
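Registration and wiring belong to the Event Versioning guide, but the heart of an upcaster is a pure function from the old payload shape to the new one. The v1-to-v2 change below is invented purely for illustration:

```typescript
// Illustrative only: v1 order.placed stored a flat `customerName`;
// v2 nests it under `customer`. A pure, trivially testable function.
type OrderPlacedV1 = { orderId: string; customerName: string; items: LineItem[]; totalAmount: number };
type OrderPlacedV2 = { orderId: string; customer: { name: string }; items: LineItem[]; totalAmount: number };

function upcastOrderPlacedV1toV2(data: OrderPlacedV1): OrderPlacedV2 {
  const { customerName, ...rest } = data;
  return { ...rest, customer: { name: customerName } };
}
```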
Step 6 — External projections (search, cache, notifications)
Now that the core loop is solid, add side-effect projections (Elasticsearch, Algolia, Redis, email). These need idempotent handlers — use ctx.event.id as the dedup key so replays are safe. The discipline you built in Steps 1–4 is what makes these tractable.
Step 7 — Process managers when cross-aggregate workflows appear
The first time one aggregate needs to react to another’s events, introduce a process manager (saga). In Ironflow this is an orchestrated function with compensation. Resist the urge to do this earlier — premature sagas are worse than no sagas.
Step 8 — Partition high-volume projections; snapshot if measured
Partition the projection that’s actually hot (use the partition browser to see lag per key). Add snapshots only when fold latency exceeds your budget on a real stream — most streams stay short enough that snapshotting is unnecessary complexity. Measure first.
Infrastructure concerns like clustering and multi-node projections come after the core loop is rock-solid. See the Event Sourcing how-to guides for each piece in depth.