Why Your Backend Forgets Everything

Most backends are designed to forget.

Every UPDATE statement overwrites the previous value. Every state mutation erases what was there before. Your database holds the current state of the world, but the history of how it got there? Gone.

The Problem: State-Overwriting Architecture

Consider a typical order processing system. When an order moves from “pending” to “paid”, you run:

UPDATE orders SET status = 'paid', paid_at = NOW() WHERE id = 'order-123';

The order is now “paid.” But what was it before? When did the transition happen? What steps did the system take between “pending” and “paid”? To answer those questions, you need logging, audit trails, and debugging infrastructure built separately — and you almost certainly didn’t build all of it.
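The alternative to overwriting is appending. A minimal sketch of an append-only log in TypeScript (the `OrderEvent` and `EventLog` names are illustrative, not from any particular library):

```typescript
// Minimal sketch: an append-only event log instead of in-place mutation.
type OrderEvent = {
  type: "order.placed" | "order.paid";
  orderId: string;
  at: Date;
  data: Record<string, unknown>;
};

class EventLog {
  private events: OrderEvent[] = [];

  // Append-only: nothing is ever updated or deleted.
  append(event: OrderEvent): void {
    this.events.push(event);
  }

  // The full history of one order is always recoverable.
  forOrder(orderId: string): OrderEvent[] {
    return this.events.filter((e) => e.orderId === orderId);
  }
}

const log = new EventLog();
log.append({ type: "order.placed", orderId: "order-123", at: new Date(), data: {} });
log.append({ type: "order.paid", orderId: "order-123", at: new Date(), data: {} });
console.log(log.forOrder("order-123").map((e) => e.type));
// ["order.placed", "order.paid"] — the “pending” → “paid” transition is preserved
```

With the log in place, the questions above answer themselves: the previous status, the transition time, and every intermediate step are all rows in the history.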

This is the state-overwriting architecture that dominates backend development:

Event happens → Update state → Previous state is gone

The consequences compound:

  • Debugging is archaeology. A customer reports a wrong charge. You grep through log files, piece together timestamps, and hope your logging was detailed enough. It usually isn’t.
  • Audit trails are afterthoughts. Compliance requires knowing who changed what and when. You bolt on an audit log table, but it’s never complete — developers forget to log things, or log them inconsistently.
  • Reprocessing is impossible. A bug corrupted orders from last Tuesday. You can’t replay the original events through the fixed logic because the events were never stored — only their side effects were.
  • Understanding is limited. You can see what the system looks like now, but not why it looks that way.

What If Your Backend Remembered Everything?

Imagine a different architecture. Instead of overwriting state, you record every change as an immutable event:

order.placed → order.paid → order.shipped → order.delivered
     ↓             ↓             ↓               ↓
  [stored]      [stored]      [stored]        [stored]

Current state? Derive it from the events. Need the state at any point in time? Replay the events up to that moment. Need to understand why something happened? Read the event sequence.
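“Derive it from the events” is just a fold over the history. A minimal sketch in TypeScript (the types and transition table are illustrative, not Ironflow’s API):

```typescript
// Minimal sketch: current state — and state at any point in time — as a fold over events.
type OrderStatus = "pending" | "paid" | "shipped" | "delivered";
type Event = { type: string; at: Date };

// Each event type maps the order into a new status.
const transitions: Record<string, OrderStatus> = {
  "order.placed": "pending",
  "order.paid": "paid",
  "order.shipped": "shipped",
  "order.delivered": "delivered",
};

// Replay events up to a cutoff to reconstruct the state at that moment.
function stateAt(events: Event[], cutoff: Date): OrderStatus | undefined {
  return events
    .filter((e) => e.at <= cutoff)
    .reduce<OrderStatus | undefined>((state, e) => transitions[e.type] ?? state, undefined);
}

const history: Event[] = [
  { type: "order.placed", at: new Date("2024-01-01") },
  { type: "order.paid", at: new Date("2024-01-02") },
  { type: "order.shipped", at: new Date("2024-01-03") },
];

console.log(stateAt(history, new Date("2024-01-02"))); // "paid"
console.log(stateAt(history, new Date("2024-01-04"))); // "shipped"
```

The same fold answers both questions: pass `new Date()` for the current state, or any past timestamp for the state at that moment — no extra bookkeeping required.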

This is event sourcing, and it’s been used at scale by companies like Goldman Sachs, LMAX, and Walmart for decades. But it’s historically been hard to implement — you need a message broker, an event store, projection infrastructure, and a way to coordinate it all.

Continuous History: Recording + Deriving + Rewinding

We built Ironflow around a simple idea: record every change, derive everything else from it.

It’s a single binary that gives you:

  1. Emit — Publish events as immutable facts
  2. React — Functions process events with durable, memoized steps (if the process crashes, it resumes from the last completed step)
  3. Derive — Projections automatically compute read models from events (no batch jobs, no manual aggregation)
  4. Rewind — Time-travel through any execution, seeing the exact state at any moment
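The durable-step idea in (2) can be sketched in plain TypeScript: each step’s result is persisted under its name, so replaying the function after a crash reuses recorded results instead of re-executing side effects. This illustrates the general technique, not Ironflow’s internals (`StepStore` and `runStep` are hypothetical names):

```typescript
// Minimal sketch of durable, memoized steps: results are saved by step name,
// so a re-run after a crash skips steps that already completed.
type StepStore = Map<string, unknown>;

async function runStep<T>(
  store: StepStore,
  name: string,
  fn: () => Promise<T>
): Promise<T> {
  if (store.has(name)) {
    // Already completed on a previous attempt — return the recorded result.
    return store.get(name) as T;
  }
  const result = await fn();
  store.set(name, result); // In a real system this write would be durable.
  return result;
}

async function demo() {
  const store: StepStore = new Map();
  let charges = 0;

  // First attempt: executes the step and records its result.
  await runStep(store, "charge-payment", async () => { charges++; return "ok"; });
  // Retry after a simulated crash: the recorded result is reused.
  await runStep(store, "charge-payment", async () => { charges++; return "ok"; });

  console.log(charges); // 1 — the payment side effect ran exactly once
}
demo();
```

The store keyed by step name is what makes “resumes from the last completed step” safe: retrying the whole function is idempotent because completed steps short-circuit.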

Here’s a complete example — a function that processes orders, a projection that derives statistics, and a worker that runs it all:

import { createFunction, createProjection, createWorker } from "@ironflow/node";

const processOrder = createFunction(
  {
    id: "process-order",
    triggers: [{ event: "order.placed" }],
    recording: true, // Every step is permanently recorded
  },
  async ({ event, step }) => {
    const order = await step.run("validate", async () => {
      return { valid: true, orderId: event.data.orderId };
    });
    await step.run("charge-payment", async () => {
      return { charged: true, amount: event.data.total };
    });
    return { order };
  }
);

const orderStats = createProjection({
  name: "order-stats",
  events: ["order.placed"],
  initialState: () => ({ totalOrders: 0, totalRevenue: 0 }),
  handler: (state, event) => ({
    totalOrders: state.totalOrders + 1,
    totalRevenue: state.totalRevenue + event.data.total,
  }),
});

const worker = createWorker({
  functions: [processOrder],
  projections: [orderStats],
});

worker.start();

No separate message broker to configure. No event store to set up. No projection infrastructure to manage. One binary, one SDK, five minutes from zero to running.

brew install sahina/tap/ironflow
ironflow serve --dev
ironflow init my-app && cd my-app

Start the worker, emit an event, and watch it flow through the system — from event to function to projection to time-travel:

pnpm dev # Start the worker
ironflow emit order.placed --data '{"orderId":"ord-1","total":49.99}'
ironflow inspect <run-id> # Time-travel through execution

The Getting Started tutorial walks through the full experience in five minutes.

Your backend doesn’t have to forget everything. It just needs the right architecture.