Platform Comparison
Last verified: April 2026. Vendor facts decay — please file an issue if anything below is out of date.
Every comparison below answers the same question: where does the product start?
The starting point determines what’s native and what’s bolted on. Ironflow starts from the recorded fact — every event, every workflow step, every state change is the same kind of thing in one continuous history. Competitors start from workflows, event streams, or function triggers. They can add the other capabilities, but the foundation determines which ones emerge naturally and which feel stapled on.
High-Level Comparison
| Feature | Ironflow | Temporal | Inngest | Hatchet | Restate | Kurrent | Kafka |
|---|---|---|---|---|---|---|---|
| Starting Point | The recorded fact | The workflow | The function trigger | The durable task | The state machine | The event stream | The append-only log |
| Architecture | Single Binary (Embedded NATS) | Cluster (Server + DB + Elasticsearch + UI) | Managed Cloud / Sidecar | Server binary + PostgreSQL | Cluster / Proxy | Cluster (Server + DB) | Cluster (Brokers + ZooKeeper/KRaft) |
| Unified History | Native — events and execution in one timeline | No — separate systems | No — events are signals | No — execution event log only | No — state machines only | Partial — events only, no execution | No — log only, assemble everything |
| Local DX | Zero-Config | Docker Compose required | Docker or Cloud required | PostgreSQL required (Docker) | Docker required | Docker required | Docker Compose required |
| Execution | Push (HTTP) & Pull (gRPC) | Pull (gRPC) | Push (HTTP) | Pull (gRPC) | Push (HTTP) | N/A (no execution) | N/A (no execution) |
| State Store | SQLite / Postgres | MySQL / PG / Cassandra | Managed (Proprietary) | PostgreSQL only | Embedded RocksDB | Event Store | Topic partitions |
| Learning Curve | Low | High (Complex primitives) | Low | Low | Medium | Medium (ES concepts) | High (Ops-heavy) |
vs. Temporal
Temporal starts from the workflow. Ironflow starts from the recorded fact. Temporal can add event storage, but workflow steps and domain events will always be separate systems that happen to coexist — a bundle. In Ironflow, a workflow step completing IS a recorded fact, the same kind of thing as a domain event. That’s why you can query “show me everything that happened to order-123” and see both in one timeline. That capability emerges from the foundation — you can’t bolt it on.
Temporal is the gold standard for durable execution at enterprise scale. If you need millions of concurrent workflows across a distributed cluster, Temporal is battle-tested.
Where Ironflow differs: Temporal treats workflows and events as separate concerns. You can emit signals and queries, but the workflow history and your domain events live in different systems. In Ironflow, they’re the same history — which is why history navigation (time-travel debugging), history correction (hot patching), and unified entity timelines work out of the box.
|  | Ironflow | Temporal |
|---|---|---|
| Setup | `ironflow serve` | `temporal server start-dev` or Docker Compose (cluster) |
| Unified history | Events + execution in one timeline | Workflow history only |
| Debugging | History navigation, correction, editing | Replay-based, Web UI |
| Scale model | Single node (100k+ runs) | Distributed cluster (millions) |
Choose Temporal if you need massive distributed scale with proven enterprise support. Choose Ironflow if you want events and execution in one continuous history with zero infrastructure.
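The unified-timeline claim is easiest to see as data. Here is a minimal self-contained sketch in plain TypeScript; the `RecordedFact` shape and the `timeline` helper are invented for illustration and are not Ironflow’s actual API:

```typescript
// A "recorded fact" is the common shape for both domain events and
// workflow steps — the foundation the section above describes.
type RecordedFact = {
  entityId: string;
  kind: "domain.event" | "workflow.step";
  name: string;
  at: number; // logical sequence number in the continuous history
};

// One continuous history: both kinds interleave in a single store.
const history: RecordedFact[] = [
  { entityId: "order-123", kind: "domain.event", name: "order.placed", at: 1 },
  { entityId: "order-123", kind: "workflow.step", name: "charge", at: 2 },
  { entityId: "order-456", kind: "domain.event", name: "order.placed", at: 3 },
  { entityId: "order-123", kind: "workflow.step", name: "wait-for-inventory", at: 4 },
];

// "Show me everything that happened to order-123" becomes a single
// filter, because events and execution share one timeline.
function timeline(entityId: string): RecordedFact[] {
  return history
    .filter((f) => f.entityId === entityId)
    .sort((a, b) => a.at - b.at);
}

console.log(timeline("order-123").map((f) => `${f.kind}:${f.name}`));
```

In a two-system design (workflow engine plus event store), this query requires correlating two histories with separate IDs and clocks; here it is one filter over one log.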
vs. Inngest
Inngest starts from the function trigger. Events kick off workflows, but the events are signals — they disappear after delivery. In Ironflow, events are permanent recorded facts. Workflow steps are permanent recorded facts. The trigger and the execution live in the same history. Even if Inngest added event persistence, those would be two separate systems sharing a product, not one unified history.
Inngest delivers excellent developer experience for serverless workflows. If you’re building on Vercel/Next.js and want managed infrastructure, Inngest is polished.
Where Ironflow differs: Inngest treats events as triggers — they start functions then disappear. There’s no event history to query, no projections to derive, no entity timeline to browse. Ironflow treats events as permanent facts that power projections, entity streams, and time-travel.
|  | Ironflow | Inngest |
|---|---|---|
| Events | Permanent recorded facts | Ephemeral triggers |
| Event sourcing | Native (entity streams, projections) | Not available |
| Hosting | Self-hosted (single binary) | Managed cloud |
| Execution modes | Push + Pull | Push only |
Choose Inngest if you want a managed cloud service with zero ops for serverless functions. Choose Ironflow if you want your events to be permanent history, not disposable triggers.
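The trigger-versus-fact distinction fits in a few lines of plain TypeScript (the names here are invented for illustration, not either vendor’s API). An event appended before delivery can be replayed into consumers added later; an event that is only delivered cannot:

```typescript
type DomainEvent = { name: string; data: unknown };

const log: DomainEvent[] = []; // append-only: every emitted event is a permanent fact
const handlers: Record<string, (e: DomainEvent) => void> = {};

function on(name: string, fn: (e: DomainEvent) => void): void {
  handlers[name] = fn;
}

// The event is recorded first, then delivered as a trigger.
// Delete the `log.push` line and you have the ephemeral-trigger model.
function emit(e: DomainEvent): void {
  log.push(e);
  handlers[e.name]?.(e);
}

// Because the log survives delivery, a projection added months later
// can still see every event that ever happened.
function replay(fn: (e: DomainEvent) => void): void {
  for (const e of log) fn(e);
}

on("order.placed", () => { /* start a workflow */ });
emit({ name: "order.placed", data: { total: 42 } });
emit({ name: "order.shipped", data: {} });
```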
vs. Hatchet
Hatchet starts from the durable task. Workers pull tasks from PostgreSQL, every checkpoint persists, and the dashboard replays the execution log. It’s a sharper Temporal: same shape, simpler ops, MIT-only. But the durability is execution-shaped, not domain-shaped — if you want your domain events and your workflow steps in the same history, Hatchet keeps them in different stores.
Hatchet is the cleanest “Temporal but Postgres-only” pitch in the market. If your team already runs Postgres, doesn’t need a separate broker, and wants a familiar workflow-engine model, Hatchet is sharp.
Where Ironflow differs: Hatchet’s “durable” means the execution event log — task started, task completed, retry attempted. Ironflow’s “durable” means the recorded fact — your domain events and your workflow steps in one continuous history. That’s why entity timelines, projections, and time-travel across workflows + events all work in Ironflow and don’t exist in Hatchet.
|  | Ironflow | Hatchet |
|---|---|---|
| Storage | SQLite (dev) / Postgres (prod) + embedded NATS | PostgreSQL only (PG is also the queue) |
| Durability scope | Domain events + execution steps in one history | Execution event log only |
| Execution modes | Push + Pull | Pull only |
| Agent surface | `agent()` handler with ctx-injected `tool` / `llm` / `approve` / `memory` / `spawn` + MCP server (`ironflow mcp`) | Generic tasks (agents are a use case, not an API) |
| History tools | Time-travel, hot patching, scoped injection, TUI/DAP | Dashboard replay |
Choose Hatchet if you want a sharper Temporal with Postgres-only operational simplicity for plain task workloads. Choose Ironflow if you want events and execution in one history, agent-native primitives, and history navigation/correction built in.
vs. Kurrent (EventStoreDB)
Kurrent starts from the event stream. Ironflow starts from the recorded fact. Kurrent records your data changes — and they’re adding SQL projections. But your workflow execution, your authorization decisions, your debugging sessions will always live somewhere else. In Ironflow, those are all recorded facts in the same history. Cross-concern time-travel and compliance-as-a-byproduct emerge from that unification.
Kurrent is the pioneer of event sourcing databases. If you need a dedicated, high-performance event store with mature subscription APIs, Kurrent is the original.
Where Ironflow differs: Kurrent stores your domain events but doesn’t execute workflows. You need a separate orchestrator (Temporal, MassTransit, etc.) for durable execution. In Ironflow, workflow steps and domain events are both recorded facts in the same history — so you can see everything that happened to an entity in one view.
|  | Ironflow | Kurrent |
|---|---|---|
| Event storage | Built-in (entity streams) | Dedicated event store |
| Workflow execution | Built-in (durable, memoized) | Not available (need separate tool) |
| Projections | Built-in (managed + external) | Built-in (JS projections, adding SQL) |
| Unified history | Events + execution together | Events only |
Choose Kurrent if you need a dedicated event store with mature clustering and subscription APIs. Choose Ironflow if you want event sourcing and workflow execution unified in one history.
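A projection, in the sense both rows above use it, is a fold: current state derived from the event history rather than stored directly. A self-contained sketch (the event and view shapes are invented for illustration):

```typescript
// Domain events for one order, as they would appear in its stream.
type OrderEvent =
  | { type: "order.placed"; total: number }
  | { type: "order.charged" }
  | { type: "order.shipped" };

type OrderView = { status: string; total: number };

// One step of the fold: apply a single event to the current view.
function apply(view: OrderView, e: OrderEvent): OrderView {
  switch (e.type) {
    case "order.placed":
      return { status: "placed", total: e.total };
    case "order.charged":
      return { ...view, status: "charged" };
    case "order.shipped":
      return { ...view, status: "shipped" };
  }
}

// The projection is the fold over the whole stream. Because the events
// are permanent, the view can be rebuilt from scratch at any time.
function project(events: OrderEvent[]): OrderView {
  return events.reduce(apply, { status: "none", total: 0 });
}
```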
vs. Kafka
Kafka gives you the philosophy (append-only log) without the platform. You get the foundation but you assemble everything yourself — consumers, projections, workflow orchestration, monitoring. Ironflow gives you the same philosophy as a complete platform. Same starting point, none of the assembly.
Kafka is the industry standard for high-throughput event streaming. If you need to process millions of events per second across a distributed cluster, Kafka is proven.
Where Ironflow differs: Kafka gives you the log and leaves you to build everything on top — consumers, state management, workflow orchestration, projections, monitoring. Ironflow gives you the same append-only philosophy as a complete platform with workflows, projections, time-travel, and a dashboard.
|  | Ironflow | Kafka |
|---|---|---|
| Setup | Single binary, <100ms boot | Multi-broker cluster + ZooKeeper/KRaft |
| Workflows | Built-in durable execution | Not available (need Flink, Temporal, etc.) |
| Projections | Built-in | Not available (need custom consumers) |
| Dashboard | Built-in | Not included (need Confluent, AKHQ, etc.) |
Choose Kafka if you need massive throughput event streaming for data pipelines. Choose Ironflow if you want the append-only philosophy as a complete platform.
vs. Restate
Restate starts from the distributed state machine. Ironflow starts from the recorded fact. Restate provides durable execution with an embedded state store, but events and execution remain separate concerns. In Ironflow, they’re the same history.
Restate offers durable execution with an elegant virtual object model. If you’re building distributed state machines with strong consistency, Restate is innovative.
Where Ironflow differs: Restate focuses on durable execution and state management but doesn’t provide event sourcing, projections, or unified entity timelines. Ironflow’s recorded-fact foundation means all of these emerge naturally.
|  | Ironflow | Restate |
|---|---|---|
| Event sourcing | Built-in | Not available |
| Projections | Built-in | Not available |
| State model | Event-sourced (derived from history) | Virtual objects (mutable state) |
| Unified history | Events + execution in one timeline | Execution only |
Choose Restate if you want distributed virtual objects with strong consistency guarantees. Choose Ironflow if you want execution and events in one continuous history.
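The state-model row is the crux of this comparison. With mutable objects, only the latest value exists; with event-sourced state, the current value is a fold over history, and the state at any earlier point is the same fold over a shorter prefix. A toy counter in plain TypeScript (illustrative, not either product’s API):

```typescript
// Each event records a change, never the resulting value.
type CounterEvent = { delta: number };

const events: CounterEvent[] = [{ delta: 5 }, { delta: -2 }, { delta: 10 }];

// Current state: fold over the whole history.
function current(): number {
  return events.reduce((n, e) => n + e.delta, 0);
}

// Time-travel: state as of event k is the same fold over a prefix.
// A mutable store cannot answer this without extra bookkeeping.
function stateAt(k: number): number {
  return events.slice(0, k).reduce((n, e) => n + e.delta, 0);
}
```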
Key Differentiators
1. One Binary, One History
Unlike Temporal (multi-service cluster), Kurrent (dedicated event store + separate orchestrator), or Kafka (broker cluster + everything else), Ironflow is a single binary with everything unified.
- Cold boot to `/health` measured under 500ms in the `TestBootTime` bench (see Benchmarks)
- Zero external dependencies for local dev (embedded NATS JetStream + SQLite)
- Events, execution, projections, and debugging in one process
2. Push + Pull Hybrid Mode
Most workflow engines are either “Push-only” (Inngest, Restate) or “Pull-only” (Temporal). Ironflow supports both simultaneously from the same codebase.
- Push Mode: Best for Next.js, Lambda, and serverless. Ironflow calls you.
- Pull Mode: Best for AI/ML, video processing, and long-running tasks. You pull from Ironflow via gRPC.
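The difference between the two modes is who drives the loop. A toy sketch in plain TypeScript (names invented for illustration): push mode delivers tasks to a registered handler, pull mode lets the worker fetch at its own pace.

```typescript
type Task = { id: string; name: string };

// A shared backlog of work, standing in for the engine's queue.
const queue: Task[] = [
  { id: "t1", name: "charge" },
  { id: "t2", name: "transcode" },
];

// Push mode: the engine drives the loop and invokes your handler —
// a fit for stateless HTTP endpoints (serverless, Next.js routes).
function pushDeliver(handler: (t: Task) => void): void {
  while (queue.length > 0) handler(queue.shift()!);
}

// Pull mode: the worker drives the loop, fetching one task at a time —
// a fit for long-running jobs that control their own pacing and concurrency.
function pullNext(): Task | undefined {
  return queue.shift();
}
```

A hybrid engine exposes both interfaces over the same queue, so a serverless route and a GPU worker can drain the same backlog.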
3. History Navigation (The “Workflow DVR”)
Because every step is a recorded fact in the continuous history, Ironflow provides capabilities that emerge from the foundation:
- History Correction (hot patching): Edit a failed step’s output in production and resume execution.
- History Navigation (time-travel debugging): Scrub through any execution frame-by-frame.
- History Editing (scoped injection): Pause a running workflow, modify step data, resume.
- TUI Debugger: Step through production runs in your terminal.
- DAP Support: Attach VS Code to a production run ID.
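All five capabilities reduce to one mechanism: step results are recorded facts, and replay consults the record before re-running anything. The sketch below shows that memoization idea, and hot patching as editing the record, in plain TypeScript rather than Ironflow’s real internals:

```typescript
// Completed steps are recorded facts; replay returns the recorded
// result instead of executing the step function again.
const recorded = new Map<string, unknown>();

function step<T>(name: string, fn: () => T): T {
  if (recorded.has(name)) return recorded.get(name) as T; // replay from history
  const result = fn();
  recorded.set(name, result); // record the fact
  return result;
}

const first = step("charge", () => ({ ok: false })); // runs, records { ok: false }

// Hot patching is then just editing the recorded fact before resuming.
// The step function never re-executes; replay sees the patched record.
recorded.set("charge", { ok: true });
const second = step("charge", () => ({ ok: false }));
```

Time-travel falls out of the same record: scrubbing to an earlier frame means replaying with a truncated copy of the map.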
Code Comparison
Ironflow (TypeScript)
```typescript
import { createFunction } from "@ironflow/node";

const processOrder = createFunction(
  {
    id: "process-order",
    triggers: [{ event: "order.placed" }],
    recording: true, // Every step becomes a recorded fact
  },
  async ({ event, step }) => {
    const payment = await step.run("charge", () =>
      stripe.charge(event.data.total)
    );
    await step.sleep("wait-for-inventory", "5m");
    return { success: true };
  }
);
```

Temporal (Go)
```go
func OrderWorkflow(ctx workflow.Context, order Order) (OrderResult, error) {
	options := workflow.ActivityOptions{StartToCloseTimeout: time.Minute}
	ctx = workflow.WithActivityOptions(ctx, options)

	var result ChargeResult
	err := workflow.ExecuteActivity(ctx, ChargeActivity, order).Get(ctx, &result)
	if err != nil {
		return OrderResult{}, err
	}

	workflow.Sleep(ctx, 5*time.Minute)
	return OrderResult{Success: true}, nil
}
```

When to Choose Ironflow
- You want one history, not many: Events, workflow steps, and audit trails in one unified timeline per entity.
- You want to move fast: Durable execution without spending a week setting up a cluster.
- You need compliance as a byproduct: The audit trail is the system itself, not a separate layer to build.
- You are on the edge: Single Go binary for limited-resource environments.
- You are building an AI agent: Long-running Pull workers for LLM chains with Push simplicity for your web UI.
- You care about debugging: History navigation, correction, and editing built into the foundation.
Performance & Scale
Published numbers come from the in-tree bench suite (`make bench` + k6 load tests). See Benchmarks for thresholds, methodology, and how to reproduce on your hardware.
| Metric | Ironflow | Temporal | Inngest | Hatchet | Restate |
|---|---|---|---|---|---|
| Setup Time | <1 minute (single binary) | 1 hour+ | 10 minutes | ~15 minutes (PG-bound) | 15 minutes |
| Boot Time | <500ms (TestBootTime warn threshold) | ~5s | N/A | ~20ms task start (server boot not published) | <1s |
| Load-test p95 | event-emission <500ms, mixed-workload <1s, function-invoke <10s | Not directly comparable | Cloud-managed | Not published | Not published |
| Max Concurrent Runs | Not yet published — depends on hardware + storage backend | Millions (Cluster) | Million+ (Cloud) | Not published | 50k+ (Single Node) |
| Memory Footprint | Not yet published | ~500MB+ | N/A (Cloud) | Not published | ~50MB |
Summary
Ironflow is the first Continuous History platform — events and execution unified in one recorded history, shipped as a single binary.
Other platforms start from workflows (Temporal), function triggers (Inngest), event streams (Kurrent), or append-only logs (Kafka) and can add the missing pieces. But the result is always a bundle — separate systems coexisting in one product. The emergent capabilities (unified entity timelines, cross-concern time-travel, compliance-as-a-byproduct) don’t appear because the foundation isn’t unified. Ironflow’s foundation is.