
The durable runtime for AI agents

Continuous History
for every state change

A single binary — events, workflows, projections, and a complete audit trail. Nothing else to run.

Record every change. Replay any moment.

Your backend is scattered across too many systems

Infrastructure sprawl

Temporal for workflows, Kafka for events, a separate event store, Terraform to wire it together. Three to five systems — each needing its own expertise, its own upgrades, its own on-call rotation.

Usability wall

Every tool brings its own mental model, its own dashboard, its own way of debugging. Your team juggles abstractions just to trace one request through the system.

Barrier of entry

Event-driven architectures take months to set up properly. The learning curve is steep, the infrastructure is complex, and the team that built it becomes the only team that can maintain it.

Continuous History

A property of the system — not a feature you enable

Every state change — events, workflow steps, decisions — is captured automatically into one continuous, append-only timeline. No schema to define. No configuration to maintain. It works retroactively on everything already recorded.

The Lens

Point at any identifier — a transaction, a customer, an order — and see everything connected to it across all primitives. Events, runs, projections, all on one chronological timeline. No log archaeology. No stitching across dashboards.

One binary. Everything included.

Runtime
Single Go binary

Statically linked. No CGO. No external dependencies.

Messaging
NATS JetStream

Embedded. Event routing, pub/sub, and KV store.

Storage
SQLite / PostgreSQL

SQLite for dev, Postgres for prod. Auto-detected.

Dashboard
React UI

Built-in. Events, runs, projections, time-travel.

Push Mode

HTTP POST to your endpoint. For serverless — Next.js, Lambda, Cloud Functions. Ideal for tasks under 10 seconds.

Pull Mode

gRPC streaming from the server. For long-running workers with no timeout limits. Crash-resilient with step memoization.

continuous-history-demo
1. Emit events
$ ironflow emit order.placed --data '{"orderId":"order-1","total":99.99}'
Event emitted: evt-a3f8 (order.placed)
Triggered 1 run(s): run-7f3a
$ ironflow emit order.placed --data '{"orderId":"order-2","total":149.00}'
Event emitted: evt-b7c2 (order.placed)
Triggered 1 run(s): run-9b2c
2. Query what was derived
$ ironflow projection get order-stats --json | jq '.state'
{ "totalOrders": 2, "totalRevenue": 248.99 }
3. Rewind time
$ ironflow inspect run-7f3a --replay
Frame 3/3 — step.completed 'send-confirm'
validate-order {"valid":true,"orderId":"order-1"}
process-payment {"charged":true,"amount":99.99}
send-confirm {"sent":true}

Built for systems that can't afford gaps

A customer disputes a charge. The entity stream already captured every state change — authorization, clearing, settlement, chargeback. Point the lens at the transaction ID and the full timeline appears. Scrub to the exact moment of authorization. See the step-by-step decision trail. No log archaeology, no cross-referencing three dashboards.

Explore in docs →

A suspicious transaction hits. A push-mode function fires in milliseconds, running parallel checks — velocity counters, merchant risk scores, behavioral models — all in one step.parallel() call. A projection tracks each model's accuracy in real time. Scoped injection lets the risk team replay the exact transaction with modified thresholds. No staging environment needed.

Explore in docs →

One entity stream, fed by every service that touches the customer — onboarding, transactions, support tickets, account changes. Three projections derive different views: 360-degree profile, churn risk score, lifetime value. Eight years of schema changes handled transparently by upcasters. No big-bang migrations.

Explore in docs →

A merchant applies. A pull-mode worker orchestrates document verification, compliance checks, risk scoring, and manual review — each as a memoized step. step.subscribe() pauses the workflow for human approval. If the process crashes at step 7 of 12, it resumes at step 7. The compliance team can rewind to any point and see exactly who approved what, when.

Explore in docs →

How Ironflow fits

Coming from Temporal or Inngest

Great workflow engines. But when you need event sourcing — entity streams, projections, time-travel debugging — you're adding a second system. Then a third for the event store. Ironflow starts from the recorded fact. Workflows, projections, and audit trails all derive from the same continuous history.

Coming from the AWS / GCP glue stack

Kafka for messaging, Lambda for compute, DynamoDB for state, Step Functions for orchestration, CloudWatch for debugging — five consoles to trace one request. Ironflow collapses that into one binary, one dashboard, one timeline.

Rolling your own

You've built event-driven systems before. You know it takes two to six months to get the foundations right — event store, replay, projections, idempotency. Ironflow is what you'd build if you had the time. Single binary. Boots in milliseconds. Production-ready.

See detailed comparison →

A vocabulary for your backend

Emit

Record events as permanent, immutable facts. The foundation everything else is built on.

React

Functions with memoized, durable steps. Crash at step 3 of 5? Resume at step 3, not step 1.

Derive

Projections compute real-time views from your event history. No queries, no batch jobs — always up to date.

Rewind

Scrub back to any moment. See exact state. Diff any two points. Replay with different inputs.

Running in 30 seconds

# Homebrew
$ brew install sahina/tap/ironflow
$ ironflow serve --dev
# → open localhost:9123

# Docker
$ docker pull ghcr.io/sahina/ironflow:latest
$ docker run -p 9123:9123 ghcr.io/sahina/ironflow:latest serve --dev
# → open localhost:9123

# Binary release
# Download from github.com/sahina/ironflow/releases
$ ./ironflow serve --dev
# → open localhost:9123
Follow the full tutorial →