Comparison

Last verified: April 2026. Vendor facts decay — please file an issue if anything below is out of date.

Every comparison below answers the same question: where does the product start?

The starting point determines what’s native and what’s bolted on. Ironflow starts from the recorded fact — every event, every workflow step, every state change is the same kind of thing in one continuous history. Competitors start from workflows, event streams, or function triggers. They can add the other capabilities, but the foundation determines which ones emerge naturally and which feel stapled on.


| Feature | Ironflow | Temporal | Inngest | Hatchet | Restate | Kurrent | Kafka |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Starting Point | The recorded fact | The workflow | The function trigger | The durable task | The state machine | The event stream | The append-only log |
| Architecture | Single Binary (Embedded NATS) | Cluster (Server + DB + ES + UI) | Managed Cloud / Sidecar | Server binary + PostgreSQL | Cluster / Proxy | Cluster (Server + DB) | Cluster (Brokers + ZK) |
| Unified History | Native — events and execution in one timeline | No — separate systems | No — events are signals | No — execution event log only | No — state machines only | Partial — events only, no execution | No — log only, assemble everything |
| Local DX | Zero-Config | Docker Compose required | Docker or Cloud required | PostgreSQL required (Docker) | Docker required | Docker required | Docker Compose required |
| Execution | Push (HTTP) & Pull (gRPC) | Pull (gRPC) | Push (HTTP) | Pull (gRPC) | Push (HTTP) | N/A (no execution) | N/A (no execution) |
| State Store | SQLite / Postgres | MySQL / PG / Cassandra | Managed (Proprietary) | PostgreSQL only | Embedded RocksDB | Event Store | Topic partitions |
| Learning Curve | Low | High (Complex primitives) | Low | Low | Medium | Medium (ES concepts) | High (Ops-heavy) |

Temporal starts from the workflow. Ironflow starts from the recorded fact. Temporal can add event storage, but workflow steps and domain events will always be separate systems that happen to coexist — a bundle. In Ironflow, a workflow step completing IS a recorded fact, the same kind of thing as a domain event. That’s why you can query “show me everything that happened to order-123” and see both in one timeline. That capability emerges from the foundation — you can’t bolt it on.
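The difference is easiest to see in code. The sketch below is illustrative only (hypothetical types, not the Ironflow API): when domain events and workflow steps share one record shape, an entity timeline is a single filter rather than a join across two systems.

```typescript
// Illustrative sketch only, not the Ironflow API. It models the idea that
// domain events and workflow steps are the same kind of record, so one
// filter over one history yields an entity's full timeline.
type Fact = {
  seq: number;
  entity: string;
  kind: "domain-event" | "workflow-step";
  name: string;
};

const history: Fact[] = [
  { seq: 1, entity: "order-123", kind: "domain-event", name: "order.placed" },
  { seq: 2, entity: "order-123", kind: "workflow-step", name: "charge:completed" },
  { seq: 3, entity: "order-456", kind: "domain-event", name: "order.placed" },
  { seq: 4, entity: "order-123", kind: "domain-event", name: "order.shipped" },
];

// One query, one timeline: domain events and execution steps interleaved.
function timeline(entity: string): Fact[] {
  return history.filter((f) => f.entity === entity);
}

console.log(timeline("order-123").map((f) => f.name));
// ["order.placed", "charge:completed", "order.shipped"]
```

In a two-system design the same question requires correlating a workflow history service with a separate event store, which is exactly the join this sketch avoids.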

Temporal is the gold standard for durable execution at enterprise scale. If you need millions of concurrent workflows across a distributed cluster, Temporal is battle-tested.

Where Ironflow differs: Temporal treats workflows and events as separate concerns. You can emit signals and queries, but the workflow history and your domain events live in different systems. In Ironflow, they’re the same history — which is why history navigation (time-travel debugging), history correction (hot patching), and unified entity timelines work out of the box.

| | Ironflow | Temporal |
| --- | --- | --- |
| Setup | `ironflow serve` | `temporal server start-dev` or Docker Compose (cluster) |
| Unified history | Events + execution in one timeline | Workflow history only |
| Debugging | History navigation, correction, editing | Replay-based, Web UI |
| Scale model | Single node (100k+ runs) | Distributed cluster (millions) |

Choose Temporal if you need massive distributed scale with proven enterprise support. Choose Ironflow if you want events and execution in one continuous history with zero infrastructure.


Inngest starts from the function trigger. Events kick off workflows, but the events are signals — they disappear after delivery. In Ironflow, events are permanent recorded facts. Workflow steps are permanent recorded facts. The trigger and the execution live in the same history. Even if Inngest added event persistence, those would be two separate systems sharing a product, not one unified history.

Inngest delivers excellent developer experience for serverless workflows. If you’re building on Vercel/Next.js and want managed infrastructure, Inngest is polished.

Where Ironflow differs: Inngest treats events as triggers — they start functions then disappear. There’s no event history to query, no projections to derive, no entity timeline to browse. Ironflow treats events as permanent facts that power projections, entity streams, and time-travel.

| | Ironflow | Inngest |
| --- | --- | --- |
| Events | Permanent recorded facts | Ephemeral triggers |
| Event sourcing | Native (entity streams, projections) | Not available |
| Hosting | Self-hosted (single binary) | Managed cloud |
| Execution modes | Push + Pull | Push only |

Choose Inngest if you want a managed cloud service with zero ops for serverless functions. Choose Ironflow if you want your events to be permanent history, not disposable triggers.


Hatchet starts from the durable task. Workers pull tasks from PostgreSQL, every checkpoint persists, and the dashboard replays the execution log. It’s a sharper Temporal: same shape, simpler ops, MIT-licensed. But the durability is execution-shaped, not domain-shaped — if you want your domain events and your workflow steps in the same history, Hatchet keeps them in different stores.

Hatchet is the cleanest “Temporal but Postgres-only” pitch in the market. If your team already runs Postgres, doesn’t need a separate broker, and wants a familiar workflow-engine model, Hatchet is sharp.

Where Ironflow differs: Hatchet’s “durable” means the execution event log — task started, task completed, retry attempted. Ironflow’s “durable” means the recorded fact — your domain events and your workflow steps in one continuous history. That’s why entity timelines, projections, and time-travel across workflows + events all work in Ironflow and don’t exist in Hatchet.

| | Ironflow | Hatchet |
| --- | --- | --- |
| Storage | SQLite (dev) / Postgres (prod) + embedded NATS | PostgreSQL only (PG is also the queue) |
| Durability scope | Domain events + execution steps in one history | Execution event log only |
| Execution modes | Push + Pull | Pull only |
| Agent surface | `agent()` handler with ctx-injected tool / llm / approve / memory / spawn + MCP server (`ironflow mcp`) | Generic tasks (agents are a use case, not an API) |
| History tools | Time-travel, hot patching, scoped injection, TUI/DAP | Dashboard replay |

Choose Hatchet if you want a sharper Temporal with Postgres-only operational simplicity for plain task workloads. Choose Ironflow if you want events and execution in one history, agent-native primitives, and history navigation/correction built in.


Kurrent starts from the event stream. Ironflow starts from the recorded fact. Kurrent records your data changes — and they’re adding SQL projections. But your workflow execution, your authorization decisions, your debugging sessions will always live somewhere else. In Ironflow, those are all recorded facts in the same history. Cross-concern time-travel and compliance-as-a-byproduct emerge from that unification.

Kurrent is the pioneer of event sourcing databases. If you need a dedicated, high-performance event store with mature subscription APIs, Kurrent is the original.

Where Ironflow differs: Kurrent stores your domain events but doesn’t execute workflows. You need a separate orchestrator (Temporal, MassTransit, etc.) for durable execution. In Ironflow, workflow steps and domain events are both recorded facts in the same history — so you can see everything that happened to an entity in one view.

| | Ironflow | Kurrent |
| --- | --- | --- |
| Event storage | Built-in (entity streams) | Dedicated event store |
| Workflow execution | Built-in (durable, memoized) | Not available (needs a separate tool) |
| Projections | Built-in (managed + external) | Built-in (JS projections, adding SQL) |
| Unified history | Events + execution together | Events only |

Choose Kurrent if you need a dedicated event store with mature clustering and subscription APIs. Choose Ironflow if you want event sourcing and workflow execution unified in one history.


Kafka gives you the philosophy (append-only log) without the platform. You get the foundation but you assemble everything yourself — consumers, projections, workflow orchestration, monitoring. Ironflow gives you the same philosophy as a complete platform. Same starting point, none of the assembly.

Kafka is the industry standard for high-throughput event streaming. If you need to process millions of events per second across a distributed cluster, Kafka is proven.

Where Ironflow differs: Kafka gives you the log and leaves you to build everything on top — consumers, state management, workflow orchestration, projections, monitoring. Ironflow gives you the same append-only philosophy as a complete platform with workflows, projections, time-travel, and a dashboard.
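To make “assemble everything yourself” concrete, here is a toy projection of the kind you hand-roll on top of a bare log: folding the append-only event sequence into a read model. This is an illustrative sketch, not Kafka client code; a real consumer also handles offsets, partitions, rebalancing, error handling, and a durable state store.

```typescript
// A minimal projection over an append-only log: derive current state
// (account balances) by folding the event sequence in order.
// On Kafka this logic, plus all its operational scaffolding, is yours to build.
type LogEvent =
  | { type: "deposited"; account: string; amount: number }
  | { type: "withdrawn"; account: string; amount: number };

function project(log: LogEvent[]): Map<string, number> {
  const balances = new Map<string, number>();
  for (const e of log) {
    const prev = balances.get(e.account) ?? 0;
    balances.set(
      e.account,
      e.type === "deposited" ? prev + e.amount : prev - e.amount,
    );
  }
  return balances;
}

const balances = project([
  { type: "deposited", account: "a1", amount: 100 },
  { type: "withdrawn", account: "a1", amount: 30 },
]);
console.log(balances.get("a1")); // 70
```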

| | Ironflow | Kafka |
| --- | --- | --- |
| Setup | Single binary, <500ms boot | Multi-broker cluster + ZooKeeper/KRaft |
| Workflows | Built-in durable execution | Not available (need Flink, Temporal, etc.) |
| Projections | Built-in | Not available (need custom consumers) |
| Dashboard | Built-in | Not included (need Confluent, AKHQ, etc.) |

Choose Kafka if you need massive throughput event streaming for data pipelines. Choose Ironflow if you want the append-only philosophy as a complete platform.


Restate starts from the distributed state machine. Ironflow starts from the recorded fact. Restate provides durable execution with an embedded state store, but events and execution remain separate concerns. In Ironflow, they’re the same history.

Restate offers durable execution with an elegant virtual object model. If you’re building distributed state machines with strong consistency, Restate is innovative.

Where Ironflow differs: Restate focuses on durable execution and state management but doesn’t provide event sourcing, projections, or unified entity timelines. Ironflow’s recorded-fact foundation means all of these emerge naturally.

| | Ironflow | Restate |
| --- | --- | --- |
| Event sourcing | Built-in | Not available |
| Projections | Built-in | Not available |
| State model | Event-sourced (derived from history) | Virtual objects (mutable state) |
| Unified history | Events + execution in one timeline | Execution only |

Choose Restate if you want distributed virtual objects with strong consistency guarantees. Choose Ironflow if you want execution and events in one continuous history.


1. Single Binary, Everything Unified

Unlike Temporal (multi-service cluster), Kurrent (dedicated event store + separate orchestrator), or Kafka (broker cluster + everything else), Ironflow is a single binary with everything unified.

  • Cold boot to /health measured under 500ms in the TestBootTime bench (see Benchmarks)
  • Zero external dependencies for local dev (embedded NATS JetStream + SQLite)
  • Events, execution, projections, and debugging in one process

2. Push + Pull Execution

Most workflow engines are either “Push-only” (Inngest, Restate) or “Pull-only” (Temporal). Ironflow supports both simultaneously from the same codebase.

  • Push Mode: Best for Next.js, Lambda, and serverless. Ironflow calls you.
  • Pull Mode: Best for AI/ML, video processing, and long-running tasks. You pull from Ironflow via gRPC.
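A minimal sketch of the difference, using hypothetical names rather than the Ironflow wire protocol: the same handler logic serves both modes, and only the transport direction changes.

```typescript
// Push vs Pull in miniature (illustrative only, not the Ironflow protocol).
type Job = { id: string; payload: string };
const handler = (job: Job) => `handled:${job.id}`;

// Push: the platform calls your endpoint with the job (good for serverless,
// where your code only runs when invoked).
function pushInvoke(job: Job): string {
  return handler(job);
}

// Pull: your long-lived worker drains jobs from the platform's queue
// (good for GPU boxes and long-running tasks that outlive an HTTP timeout).
function pullDrain(queue: Job[]): string[] {
  const results: string[] = [];
  let job: Job | undefined;
  while ((job = queue.shift()) !== undefined) {
    results.push(handler(job));
  }
  return results;
}

console.log(pushInvoke({ id: "a", payload: "x" })); // "handled:a"
console.log(pullDrain([{ id: "b", payload: "y" }])); // ["handled:b"]
```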

3. History Navigation (The “Workflow DVR”)


Because every step is a recorded fact in the continuous history, Ironflow provides capabilities that emerge from the foundation:

  • History Correction (hot patching): Edit a failed step’s output in production and resume execution.
  • History Navigation (time-travel debugging): Scrub through any execution frame-by-frame.
  • History Editing (scoped injection): Pause a running workflow, modify step data, resume.
  • TUI Debugger: Step through production runs in your terminal.
  • DAP Support: Attach VS Code to a production run ID.
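The mechanics behind navigation and correction can be sketched in a few lines. This is the underlying idea, not the Ironflow API: state at frame N is a replay of the first N recorded facts, and hot patching is editing one fact and replaying from there.

```typescript
// History navigation and correction in miniature (illustrative only).
type StepFact = { step: string; output: number };

// Navigation: reconstruct state as of any frame by replaying the prefix.
function replay(history: StepFact[], upTo: number): Record<string, number> {
  const state: Record<string, number> = {};
  for (const fact of history.slice(0, upTo)) {
    state[fact.step] = fact.output;
  }
  return state;
}

// Correction: produce a new history with one fact's output patched.
function patch(history: StepFact[], step: string, output: number): StepFact[] {
  return history.map((f) => (f.step === step ? { ...f, output } : f));
}

const run: StepFact[] = [
  { step: "charge", output: 4999 },
  { step: "reserve-stock", output: 0 }, // failed step recorded a bad output
  { step: "notify", output: 500 },
];

console.log(replay(run, 2)); // state as of frame 2
console.log(replay(patch(run, "reserve-stock", 1), 3)["reserve-stock"]); // 1
```

Because facts are data rather than opaque execution state, both operations are ordinary transformations of the history, which is what makes them safe to expose as debugger commands.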

Ironflow (TypeScript SDK):

```typescript
import { createFunction } from "@ironflow/node";

const processOrder = createFunction(
  {
    id: "process-order",
    triggers: [{ event: "order.placed" }],
    recording: true, // Every step becomes a recorded fact
  },
  async ({ event, step }) => {
    const payment = await step.run("charge", () => stripe.charge(event.data.total));
    await step.sleep("wait-for-inventory", "5m");
    return { success: true };
  }
);
```
The equivalent workflow in Temporal’s Go SDK:

```go
func OrderWorkflow(ctx workflow.Context, order Order) (OrderResult, error) {
	options := workflow.ActivityOptions{StartToCloseTimeout: time.Minute}
	ctx = workflow.WithActivityOptions(ctx, options)

	var result ChargeResult
	err := workflow.ExecuteActivity(ctx, ChargeActivity, order).Get(ctx, &result)
	if err != nil {
		return OrderResult{}, err // struct return values can't be nil
	}
	workflow.Sleep(ctx, 5*time.Minute)
	return OrderResult{Success: true}, nil
}
```

  • You want one history, not many: Events, workflow steps, and audit trails in one unified timeline per entity.
  • You want to move fast: Durable execution without spending a week setting up a cluster.
  • You need compliance as a byproduct: The audit trail is the system itself, not a separate layer to build.
  • You are on the edge: Single Go binary for limited-resource environments.
  • You are building an AI agent: Long-running Pull workers for LLM chains with Push simplicity for your web UI.
  • You care about debugging: History navigation, correction, and editing built into the foundation.

Published numbers come from the in-tree bench suite (make bench + k6 load tests). See Benchmarks for thresholds, methodology, and how to reproduce on your hardware.

| Metric | Ironflow | Temporal | Inngest | Hatchet | Restate |
| --- | --- | --- | --- | --- | --- |
| Setup Time | <1 minute (single binary) | 1 hour+ | 10 minutes | ~15 minutes (PG-bound) | 15 minutes |
| Boot Time | <500ms (TestBootTime warn threshold) | ~5s | N/A | ~20ms task start (server boot not published) | <1s |
| Load-test p95 | event-emission <500ms, mixed-workload <1s, function-invoke <10s | Not directly comparable | Cloud-managed | Not published | Not published |
| Max Concurrent Runs | Not yet published — depends on hardware + storage backend | Millions (Cluster) | Million+ (Cloud) | Not published | 50k+ (Single Node) |
| Memory Footprint | Not yet published | ~500MB+ | N/A (Cloud) | Not published | ~50MB |

Ironflow is the first Continuous History platform — events and execution unified in one recorded history, shipped as a single binary.

Other platforms start from workflows (Temporal), function triggers (Inngest), event streams (Kurrent), or append-only logs (Kafka) and can add the missing pieces. But the result is always a bundle — separate systems coexisting in one product. The emergent capabilities (unified entity timelines, cross-concern time-travel, compliance-as-a-byproduct) don’t appear because the foundation isn’t unified. Ironflow’s foundation is.