
Execution Modes

Ironflow supports two core execution modes, push and pull, plus an embedded variant of pull mode, to handle different workload requirements:

| Mode | Description | Best For |
| --- | --- | --- |
| Push | Ironflow sends HTTP requests to your endpoint | Serverless functions, short tasks |
| Pull | Your worker connects and pulls jobs over HTTP polling (JS and Go); Go also has a streaming code path, but the default entry point is polling | Long-running tasks, no timeout limits |
| Embedded | A pull-mode worker runs inside your app process (e.g., Next.js `instrumentation.ts`) | Single-service deployments, prototyping |

Push Mode (Default)

In push mode, Ironflow sends HTTP POST requests to your registered endpoint. This works well with serverless platforms such as Vercel, AWS Lambda, or Cloud Functions, and with any HTTP server, including Next.js route handlers.

```typescript
// app/api/ironflow/route.ts (Next.js App Router)
import { serve } from "@ironflow/node";
import { processOrder } from "@/functions/process-order";

export const POST = serve({
  functions: [processOrder],
  signingKey: process.env.IRONFLOW_SIGNING_KEY,
});
```

How push mode works:

  1. Ironflow receives an event
  2. Ironflow makes an HTTP POST to your endpoint URL
  3. Your handler processes the request and returns a response
  4. Ironflow persists step results for durability
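
The routing in steps 2 and 3 can be sketched as a plain dispatch function. This is purely illustrative; the `IronEvent` and `IronflowFunction` types are hypothetical stand-ins, not SDK exports:

```typescript
// Hypothetical sketch of how a push-mode handler routes an incoming event
// to registered functions -- not the SDK's actual implementation.
type IronEvent = { name: string; data: unknown };
type IronflowFunction = {
  id: string;
  triggers: { event: string }[];
  handler: (event: IronEvent) => Promise<unknown>;
};

// Run every registered function whose trigger matches the event name,
// collecting results keyed by function id.
async function dispatch(
  functions: IronflowFunction[],
  event: IronEvent,
): Promise<Record<string, unknown>> {
  const results: Record<string, unknown> = {};
  for (const fn of functions) {
    if (fn.triggers.some((t) => t.event === event.name)) {
      results[fn.id] = await fn.handler(event);
    }
  }
  return results;
}
```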

When to use push mode:

  • Serverless deployments (Vercel, AWS Lambda, Cloud Functions)
  • Tasks that complete within platform timeout limits
  • Stateless processing

Pull Mode

In pull mode, your worker connects to Ironflow and pulls jobs. The default transport is HTTP polling in both the JS and Go SDKs; the Go SDK also has a ConnectRPC bidirectional-streaming code path, but polling is the wired default. Because the worker dials out and stays connected, there are no platform timeout constraints, which makes pull mode ideal for long-running tasks.

```typescript
import { createWorker } from "@ironflow/node";
import { generateVideo } from "./functions/generate-video";

const worker = createWorker({
  serverUrl: "http://localhost:9123",
  functions: [generateVideo],
  maxConcurrentJobs: 4,
});

await worker.start();
```

Set `mode: "pull"` in your function config:

```typescript
ironflow.createFunction({ id: "generate-video", mode: "pull", /* ... */ }, handler);
```

How pull mode works:

  1. Worker connects to Ironflow and begins polling for jobs
  2. Worker registers its functions
  3. When events arrive, Ironflow assigns jobs to connected workers
  4. Worker processes jobs and reports results
  5. Polling/connection stays open for continuous processing
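
The loop in steps 1 through 5 can be sketched as follows. The `Transport` interface here is a hypothetical stand-in for the SDK's internal HTTP-polling client, not a public API:

```typescript
// Illustrative pull-mode polling loop -- not the SDK's actual implementation.
type Job = { id: string; functionId: string; payload: unknown };

// Hypothetical transport: pullJob long-polls and resolves null when the
// poll times out with no work; reportResult posts the outcome back.
interface Transport {
  pullJob(): Promise<Job | null>;
  reportResult(jobId: string, result: unknown): Promise<void>;
}

// Pull one job (if any), run its handler, and report the result.
// Returns true if a job was processed.
async function pollOnce(
  transport: Transport,
  handlers: Record<string, (payload: unknown) => Promise<unknown>>,
): Promise<boolean> {
  const job = await transport.pullJob();
  if (!job) return false; // poll timed out; caller loops and polls again
  const handler = handlers[job.functionId];
  if (!handler) throw new Error(`no handler registered for ${job.functionId}`);
  await transport.reportResult(job.id, await handler(job.payload));
  return true;
}
```

A real worker would run `pollOnce` in a loop with concurrency limits and backoff; this sketch only shows the pull, process, report cycle.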

When to use pull mode

Pull mode workers exist because push mode (HTTP POST to serverless) has real constraints: timeouts, cold starts, public endpoints, and no persistent state. Reach for a worker when any of these apply:

1. Long-running jobs (beyond your platform’s HTTP timeout) Video/audio processing, ML inference, PDF generation, large data exports, scientific compute. Serverless timeouts kill these; limits are platform-dependent (Vercel Hobby 10s, Vercel Pro 60s, Lambda 15 min). A worker runs as long as the job needs.

2. Private infrastructure access Workers run inside your VPC, data center, or on-prem environment and reach internal databases, legacy systems, and private APIs. You never need to expose an HTTP endpoint publicly — the worker dials out to Ironflow.

3. Heavy compute / GPU workloads Model training, embedding generation, video transcoding. Run on beefy hardware with GPUs that serverless platforms can’t provide.

4. Stateful workloads Warm caches, preloaded ML models, persistent DB connection pools, open Kafka consumers. Push mode cold-starts on every invocation; workers keep state resident.

5. Long-polling / streaming integrations Tail Kafka or Kinesis, maintain a WebSocket to a third party, consume SSE feeds. Workers stay connected as long as needed.

6. Projections and read-model builders Consume entity streams continuously to build materialized views. Inherently long-lived — a natural fit for workers.

7. Cost control at high volume For sustained high-throughput workloads, one always-on worker is cheaper than millions of serverless invocations.

8. Regulated or air-gapped environments When code must run on your hardware (compliance, data residency), the worker pulls jobs outbound — Ironflow never reaches in.

When NOT to use pull mode

Stick with push mode if:

  • Tasks complete within your platform’s HTTP timeout (Vercel Hobby 10s, Lambda 15min, etc.)
  • Traffic is bursty or sporadic (idle workers waste money)
  • Your team already deploys to Vercel / Next.js / Lambda and doesn’t want to run long-lived processes
  • You want the platform to handle scaling for you

Rule of thumb: outbound-only connection + long duration + stateful or private infrastructure = worker. Otherwise, use push mode.
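That rule of thumb can be stated as a tiny predicate. This is purely illustrative, not an SDK API:

```typescript
// Illustrative encoding of the rule of thumb above -- not part of Ironflow.
type WorkloadProfile = {
  outboundOnly: boolean; // can only dial out (no public endpoint)
  longRunning: boolean;  // exceeds the platform's HTTP timeout
  stateful: boolean;     // warm caches, loaded models, open connections
  privateInfra: boolean; // must reach VPC / on-prem resources
};

function recommendedMode(w: WorkloadProfile): "push" | "pull" {
  return w.outboundOnly || w.longRunning || w.stateful || w.privateInfra
    ? "pull"
    : "push";
}
```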


Embedded Worker (Pull Mode in Next.js)

For simpler deployments, embed the pull-mode worker directly inside your Next.js application using the instrumentation.ts hook — no separate process needed.

```typescript
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    const { createFunction, createWorker } = await import("@ironflow/node");

    const myFunction = createFunction(
      { id: "my-function", triggers: [{ event: "my.event" }], recording: true },
      async ({ event, step }) => {
        return await step.run("process", async () => ({ done: true }));
      },
    );

    const worker = createWorker({ functions: [myFunction] });
    worker.start().catch(console.error);
  }
}
```

The worker starts in the background when Next.js boots. The dynamic import and the NEXT_RUNTIME guard keep the file compatible with the Edge runtime.

When to use embedded workers:

  • Single-service deployments (no separate worker process)
  • Prototyping and development
  • Apps with moderate throughput where a dedicated worker fleet isn’t justified

See examples/todo-web/ for a complete example.


Triggering Events

Events trigger workflow execution. Use the SDK client to emit events from your application:

```typescript
import { createClient } from "@ironflow/node";

const client = createClient({
  serverUrl: "http://localhost:9123",
});

// Fire and forget
await client.emit("order.placed", {
  orderId: "ord_123",
  total: 99.99,
});
```

The JS SDK client exposes both `emit()` (fire-and-forget) and `emitSync(name, data, opts)`, a typed wrapper over `TriggerSync` that blocks until the run completes.

Via cURL

```shell
curl -X POST http://localhost:9123/api/v1/events \
  -H "Content-Type: application/json" \
  -d '{
    "name": "order.placed",
    "data": {
      "orderId": "ord_123",
      "total": 99.99
    }
  }'
```

Mode Comparison

| Feature | Push Mode | Pull Mode |
| --- | --- | --- |
| Timeout | Bounded by the smaller of the `timeout:` config (default 10 min) and the host platform’s HTTP timeout (Vercel Hobby 10s, Pro 60s, Lambda 15 min) | Unlimited |
| Scaling | Platform handles it | You manage workers |
| Cold starts | Yes | No (persistent connection) |
| Network | Outbound from Ironflow | Outbound from worker |
| Use case | Short tasks, serverless | Long tasks, GPU workloads |

Real-Time Event Subscriptions

Subscribe to events in real-time using WebSocket-based subscriptions with NATS-style wildcard patterns.

Full documentation: See the Events & Pub/Sub guide for comprehensive SDK guides, error handling, reconnection patterns, and best practices.

Hybrid Event Model

Ironflow supports two parallel event processing layers:

| Layer | Purpose | Use When |
| --- | --- | --- |
| Function Layer | Durable workflow execution with retries, steps, state | Processing orders, sending emails, reliable webhooks |
| Event Stream Layer | Lightweight real-time distribution | Dashboards, monitoring, analytics, debugging |

Events are published to the Event Stream immediately, regardless of whether functions match.

Quick Start

```typescript
import { ironflow } from "@ironflow/browser";

ironflow.configure({ serverUrl: "http://localhost:9123" });
await ironflow.connect();

const sub = await ironflow.subscribe("system.run.*", {
  onEvent: (event) => console.log(event),
  onError: (error) => console.error(error.message),
  replay: 10,
});

// Cleanup
sub.unsubscribe();
ironflow.disconnect();
```

Pattern Syntax

| Token | Description | Example |
| --- | --- | --- |
| `*` | Matches exactly one segment | `system.run.*.created` |
| `>` | Matches one or more segments (end of pattern only) | `system.run.>` |

Common patterns: `system.run.>` (all runs), `system.run.{id}.step.>` (steps for a single run), `events:>` (user events)
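
A matcher with these semantics could look like the following sketch. It is illustrative only (Ironflow's actual matching happens server-side) and covers dot-separated subjects:

```typescript
// Illustrative NATS-style wildcard matcher: "*" matches exactly one
// segment; ">" matches one or more segments and, per the syntax above,
// is assumed to appear only at the end of a pattern.
function matchesPattern(pattern: string, subject: string): boolean {
  const p = pattern.split(".");
  const s = subject.split(".");
  for (let i = 0; i < p.length; i++) {
    if (p[i] === ">") return s.length >= i + 1; // at least one segment remains
    if (i >= s.length) return false;
    if (p[i] !== "*" && p[i] !== s[i]) return false;
  }
  return p.length === s.length; // "*" patterns must consume the whole subject
}
```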


What’s Next?