
Getting Started

Start the server, scaffold a project, emit events, derive state, and time-travel through your system’s Continuous History — all in about five minutes.

Diagram: Continuous History — Entity Lifecycle, showing how events, workflows, projections, and time-travel connect in one unified history.

1. Start the Server

```sh
brew install sahina/tap/ironflow
ironflow serve --dev
```

The server starts at http://localhost:9123 with:

  • Dashboard at /
  • API at /api/v1/*
  • Health check at /health

The --dev flag disables authentication so you can start building immediately. No API keys or passwords needed.

Production Mode

When you’re ready for real workloads, drop the --dev flag. Ironflow will auto-bootstrap an admin account and API key on first boot — see Security for details.


2. Create Your Project

```sh
ironflow init my-app
cd my-app
```

This scaffolds a working project with a function, projection, and worker — ready to run. ironflow init runs pnpm install automatically; pass --skip-install to opt out.

Manual setup

You can also install the SDK directly: npm install @ironflow/node (TypeScript) or go get github.com/sahina/ironflow/sdk/go/ironflow (Go). See the Installation guide for details.


3. Understand the Code

Open worker.ts — this single file contains a function, a projection, and a worker:

```ts
import {
  createFunction,
  createProjection,
  createWorker,
  type IronflowProjection,
} from "@ironflow/node";

// ── Types ───────────────────────────────────────────────────────
interface OrderData {
  orderId: string;
  total: number;
  email: string;
}

// ── React: A function that processes orders ─────────────────────
// Every step is memoized. If the process crashes, it resumes
// from the last completed step. With recording enabled, every
// step is also permanently recorded for time-travel debugging.
const processOrder = createFunction(
  {
    id: "process-order",
    triggers: [{ event: "order.placed" }],
    recording: true,
  },
  async ({ event, step }) => {
    const data = event.data as OrderData;

    const order = await step.run("validate-order", async () => {
      return {
        valid: true,
        orderId: data.orderId,
        total: data.total,
      };
    });

    const payment = await step.run("process-payment", async () => {
      return {
        charged: true,
        amount: order.total,
        transactionId: `txn_${Date.now()}`,
      };
    });

    await step.run("send-confirmation", async () => {
      return { sent: true, email: data.email };
    });

    return { order, payment };
  },
);

// ── Derive: A projection that computes order statistics ─────────
// Projections are pure reducers. Every time an "order.placed"
// event is recorded, this reducer runs and the derived state
// is automatically persisted and queryable.
const orderStats = createProjection({
  name: "order-stats",
  events: ["order.placed"],
  initialState: () => ({ totalOrders: 0, totalRevenue: 0 }),
  handler: (
    state: { totalOrders: number; totalRevenue: number },
    event: { name: string; data: unknown },
  ) => ({
    totalOrders: state.totalOrders + 1,
    totalRevenue: state.totalRevenue + ((event.data as OrderData).total ?? 0),
  }),
});

// ── Start the worker ────────────────────────────────────────────
const worker = createWorker({
  functions: [processOrder],
  projections: [orderStats as IronflowProjection],
});

worker.start().then(() => {
  console.log("Worker started — listening for events");
});
```

Start the worker in a second terminal:

```sh
pnpm start
```

You should see: Worker started — listening for events


4. Emit Events

With the server and worker running, emit an event:

```sh
ironflow emit order.placed --data '{"orderId": "order-1", "total": 99.99, "email": "customer@example.com"}'
```

Watch the worker terminal — you’ll see the function pick up the event and execute each step.

Emit a few more to build up history:

```sh
ironflow emit order.placed --data '{"orderId": "order-2", "total": 49.50, "email": "another@example.com"}'
ironflow emit order.placed --data '{"orderId": "order-3", "total": 149.00, "email": "third@example.com"}'
```

5. See What Was Derived

The order-stats projection has been processing every order.placed event and maintaining a running total. Query it:

```sh
curl -s http://localhost:9123/api/v1/projections/order-stats | jq '.state.state'
```

```json
{
  "totalOrders": 3,
  "totalRevenue": 298.49
}
```

You didn’t write any aggregation queries. The projection derived this state automatically from the recorded events. Emit another event and query again — the state updates in real time.
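You can reproduce that arithmetic locally: the projection is just a pure reducer, so folding the same handler over the three events gives the same state. This is a standalone sketch that mirrors the handler in worker.ts — the server performs the same fold over the recorded event stream:

```typescript
// Standalone sketch: folding the order-stats reducer over the
// three events emitted above. Mirrors the handler in worker.ts.
interface OrderData { orderId: string; total: number; email: string; }
interface Stats { totalOrders: number; totalRevenue: number; }

const initialState = (): Stats => ({ totalOrders: 0, totalRevenue: 0 });

const handler = (state: Stats, event: { name: string; data: OrderData }): Stats => ({
  totalOrders: state.totalOrders + 1,
  totalRevenue: state.totalRevenue + (event.data.total ?? 0),
});

const events = [
  { name: "order.placed", data: { orderId: "order-1", total: 99.99, email: "customer@example.com" } },
  { name: "order.placed", data: { orderId: "order-2", total: 49.5, email: "another@example.com" } },
  { name: "order.placed", data: { orderId: "order-3", total: 149.0, email: "third@example.com" } },
];

const stats = events.reduce(handler, initialState());
console.log(stats.totalOrders, stats.totalRevenue.toFixed(2)); // → 3 "298.49"
```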

You can also see the projection in the Dashboard at http://localhost:9123 — navigate to Projections to see its status and current state.


5.5 See Durability in Action

What happens when something goes wrong mid-execution? Ironflow memoizes every completed step. If a function crashes, it resumes from the last successful step — not from scratch.

Try it: emit an event, then stop your worker mid-execution (Ctrl+C). Restart it:

```sh
pnpm start
```

The worker picks up the interrupted run and completes it from where it left off. Check the Runs page in the dashboard — you’ll see the run completed successfully despite the restart.

Resume timing

In --dev mode the scheduler reclaims an orphaned run once the stale-claim threshold (45s) elapses, and REST-worker cleanup can add up to 90s on top. Expect the run to resume after roughly 50 seconds, not instantly. Production cluster mode uses a 2-minute default threshold (see Crash Resume for the full mechanics).

This is memoized execution: each step.run() result is persisted before moving to the next step. Crash at step 3 of 5? Steps 1 and 2 aren’t re-executed. The function resumes at step 3.
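The mechanics can be sketched with a toy memoizer — an illustration of the idea, not Ironflow's storage engine. Each step's result persists under its name; on replay, a completed step returns its stored result instead of re-running:

```typescript
// Toy sketch of memoized steps — illustrates the resume behavior,
// not Ironflow's implementation. Real steps are async; this sketch
// is synchronous to keep the mechanics visible.
const stepStore = new Map<string, unknown>();
let sideEffects = 0;

function runStep<T>(name: string, fn: () => T): T {
  if (stepStore.has(name)) return stepStore.get(name) as T; // replay: skip the work
  const result = fn();
  stepStore.set(name, result); // persist before moving to the next step
  return result;
}

function processOrderOnce(crashAfter?: string): void {
  runStep("validate-order", () => { sideEffects++; return { valid: true }; });
  if (crashAfter === "validate-order") throw new Error("simulated crash");
  runStep("process-payment", () => { sideEffects++; return { charged: true }; });
  runStep("send-confirmation", () => { sideEffects++; return { sent: true }; });
}

try {
  processOrderOnce("validate-order"); // first attempt dies after step 1
} catch {
  processOrderOnce(); // retry: validate-order is memoized and skipped
}
console.log(sideEffects); // → 3, not 4: step 1 ran exactly once
```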


6. Rewind Time

Every step of every function run was permanently recorded. You can rewind to any moment.

Dashboard

  1. Open http://localhost:9123 and navigate to Runs
  2. Click any completed run
  3. Use the timeline scrubber at the top to drag back in time
  4. Watch the step outputs change as you scrub — you’re seeing the exact state of the run at that moment
  5. Click any two points to see a diff of what changed between them

CLI

```sh
# List your runs
ironflow run list

# Replay a run frame-by-frame (replace with your run ID)
ironflow inspect <run_id> --replay
```

In replay mode:

  • → or l — next frame
  • ← or h — previous frame
  • g — first frame, G — last frame
  • j/↓ — navigate steps within current frame
  • k/↑ — navigate steps up
  • Tab — switch between Steps and Details panels
  • q — quit

What Just Happened?

In five minutes, you built a system with Continuous History:

  1. Emit — You recorded events (order.placed). These are permanent, immutable facts.
  2. React — A function processed each event with durable, memoized steps. If the process had crashed mid-execution, it would have resumed from the last completed step — not restarted.
  3. Derive — A projection automatically computed order statistics from the event stream. No queries, no batch jobs — the state is always up to date.
  4. Rewind — You scrubbed back through the execution timeline and saw the exact state at any moment. The audit trail wasn’t bolted on after the fact — it was always there.

This is the core idea: record every change, derive everything else from it. Events, workflow steps, projections, audit trails, time-travel — all from one continuous history.

Push and Pull Modes

This tutorial used Pull mode — a long-running worker that streams tasks from the server via gRPC. For serverless environments (Next.js, Lambda, Cloud Functions), Ironflow also supports Push mode — the server POSTs to your HTTP endpoint. Same functions, same SDK, different deployment model. See Workflows for details.
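As a rough sketch of what a Push-mode endpoint could look like — the payload shape and response contract here are assumptions for illustration; the real contract is in the Workflows docs — using Node's built-in HTTP server:

```typescript
// Sketch of a Push-mode HTTP endpoint. ASSUMPTIONS: the
// { name, data } payload and the { ok: true } acknowledgment are
// illustrative; consult the Workflows docs for the real contract.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  if (req.method !== "POST") { res.writeHead(405).end(); return; }
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const event = JSON.parse(body); // e.g. { name: "order.placed", data: {...} }
    console.log(`received ${event.name}`);
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ ok: true }));
  });
});

server.listen(0); // ephemeral port; a real deployment uses your platform's route
```

On a serverless platform the same handler body would live in your framework's route handler instead of a standalone server.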


Next Steps