# Execution Modes
Ironflow supports two core execution modes (push and pull), plus an embedded variant of pull mode, to handle different workload requirements:
| Mode | Description | Best For |
|---|---|---|
| Push | Ironflow sends HTTP requests to your endpoint | Serverless functions, short tasks |
| Pull | Your worker connects and pulls jobs over HTTP polling (JS and Go; the Go SDK also has a streaming code path, but polling is the default entry point) | Long-running tasks, no timeout limits |
| Embedded | Pull-mode worker runs inside your app process (e.g., Next.js instrumentation.ts) | Single-service deployments, prototyping |
## Push Mode (Default)
In push mode, Ironflow sends HTTP POST requests to your registered endpoint. This works well with serverless platforms such as Vercel or AWS Lambda, a Next.js route handler, or any HTTP server.
```typescript
// app/api/ironflow/route.ts (Next.js App Router)
import { serve } from "@ironflow/node";
import { processOrder } from "@/functions/process-order";

export const POST = serve({
  functions: [processOrder],
  signingKey: process.env.IRONFLOW_SIGNING_KEY,
});
```

```go
import (
    "net/http"
    "os"

    "github.com/sahina/ironflow/sdk/go/ironflow"
)

func main() {
    handler := ironflow.Serve(ironflow.ServeConfig{
        Functions:  []ironflow.Function{ProcessOrder},
        SigningKey: os.Getenv("IRONFLOW_SIGNING_KEY"),
    })
    http.Handle("/api/ironflow", handler)
    http.ListenAndServe(":3000", nil)
}
```

How push mode works:
- Ironflow receives an event
- Ironflow makes an HTTP POST to your endpoint URL
- Your handler processes the request and returns a response
- Ironflow persists step results for durability
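Because push mode delivers jobs to a publicly reachable endpoint, the handler must verify that requests really come from Ironflow, which is what the `signingKey` option is for. As a rough sketch of how such verification typically works (the header name and hex-encoded HMAC-SHA256 scheme below are illustrative assumptions, not Ironflow's documented wire format):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verify that a raw request body was signed with the shared signing key.
// Hypothetical scheme for illustration: hex-encoded HMAC-SHA256 of the body,
// carried in a header such as "x-ironflow-signature".
function verifySignature(body: string, signatureHex: string, signingKey: string): boolean {
  const expected = createHmac("sha256", signingKey).update(body).digest("hex");
  const a = Buffer.from(expected, "hex");
  const b = Buffer.from(signatureHex, "hex");
  // timingSafeEqual throws on length mismatch, so check lengths first
  return a.length === b.length && timingSafeEqual(a, b);
}

// Sign and verify a payload with a test key
const key = "test-signing-key";
const body = JSON.stringify({ name: "order.placed", data: { orderId: "ord_123" } });
const sig = createHmac("sha256", key).update(body).digest("hex");
console.log(verifySignature(body, sig, key)); // true
console.log(verifySignature(body, sig, "wrong-key")); // false
```

A handler would reject any request that fails verification (e.g. with a 401) before running function code.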
When to use push mode:
- Serverless deployments (Vercel, AWS Lambda, Cloud Functions)
- Tasks that complete within platform timeout limits
- Stateless processing
## Pull Mode
In pull mode, your worker connects to Ironflow and pulls jobs. The default transport is HTTP polling in both the JS and Go SDKs; the Go SDK also has a ConnectRPC bidirectional-streaming code path, but polling is the default. Pull mode eliminates timeout constraints and is ideal for long-running tasks.
```typescript
import { createWorker } from "@ironflow/node";
import { generateVideo } from "./functions/generate-video";

const worker = createWorker({
  serverUrl: "http://localhost:9123",
  functions: [generateVideo],
  maxConcurrentJobs: 4,
});

await worker.start();
```

Set `mode: "pull"` in your function config:

```typescript
ironflow.createFunction({ id: "generate-video", mode: "pull", ... }, handler);
```

```go
import (
    "context"
    "log"
    "os/signal"
    "syscall"

    "github.com/sahina/ironflow/sdk/go/ironflow"
)

func main() {
    worker := ironflow.NewWorker(ironflow.WorkerConfig{
        ServerURL:         "http://localhost:9123",
        Functions:         []ironflow.Function{GenerateVideo},
        MaxConcurrentJobs: 4,
    })

    ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGTERM)
    defer cancel()

    if err := worker.Run(ctx); err != nil {
        log.Fatal(err)
    }
}
```

Set `Mode: "pull"` in your function config:

```go
ironflow.CreateFunction(ironflow.FunctionConfig{ID: "generate-video", Mode: "pull", ...}, handler)
```

How pull mode works:
- Worker connects to Ironflow over HTTP polling (Go has a streaming path available, but polling is the default)
- Worker registers its functions
- When events arrive, Ironflow assigns jobs to connected workers
- Worker processes jobs and reports results
- Polling/connection stays open for continuous processing
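The cycle above can be sketched in a few lines. This is not the SDK's internal transport, just an illustration of the fetch, run, report loop that `createWorker()` manages for you; the callback shapes are assumptions:

```typescript
type Job = { id: string; data: unknown };

// Illustrative poll loop: fetch a job if one is available, run it, report the
// result, and poll again. Bounded by `iterations` so the sketch terminates.
async function pollLoop(
  fetchJob: () => Promise<Job | null>,    // long-poll against the server
  runJob: (job: Job) => Promise<unknown>, // execute the matching function
  reportResult: (id: string, result: unknown) => Promise<void>,
  iterations: number,
): Promise<number> {
  let processed = 0;
  for (let i = 0; i < iterations; i++) {
    const job = await fetchJob();
    if (!job) continue; // nothing available yet; poll again
    const result = await runJob(job);
    await reportResult(job.id, result);
    processed++;
  }
  return processed;
}

// Demo with an in-memory queue standing in for the Ironflow server
void (async () => {
  const queue: Job[] = [{ id: "j1", data: { n: 1 } }, { id: "j2", data: { n: 2 } }];
  const results = new Map<string, unknown>();
  const n = await pollLoop(
    async () => queue.shift() ?? null,
    async (job) => ({ ok: true, id: job.id }),
    async (id, result) => { results.set(id, result); },
    3,
  );
  console.log(`processed ${n} jobs`); // processed 2 jobs
})();
```

A real worker would loop indefinitely, back off when no jobs are available, and cap in-flight jobs at `maxConcurrentJobs`.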
### When to use pull mode
Pull mode workers exist because push mode (HTTP POST to serverless) has real constraints: timeouts, cold starts, public endpoints, and no persistent state. Reach for a worker when any of these apply:
1. Long-running jobs (beyond your platform's HTTP timeout) Video/audio processing, ML inference, PDF generation, large data exports, scientific compute. Serverless timeouts kill these; limits are platform-dependent (Vercel Hobby 10s, Vercel Pro 60s, Lambda 15min, etc.). A worker runs as long as the job needs.
2. Private infrastructure access Workers run inside your VPC, data center, or on-prem environment and reach internal databases, legacy systems, and private APIs. You never need to expose an HTTP endpoint publicly — the worker dials out to Ironflow.
3. Heavy compute / GPU workloads Model training, embedding generation, video transcoding. Run on beefy hardware with GPUs that serverless platforms can’t provide.
4. Stateful workloads Warm caches, preloaded ML models, persistent DB connection pools, open Kafka consumers. Push mode cold-starts on every invocation; workers keep state resident.
5. Long-polling / streaming integrations Tail Kafka or Kinesis, maintain a WebSocket to a third party, consume SSE feeds. Workers stay connected as long as needed.
6. Projections and read-model builders Consume entity streams continuously to build materialized views. Inherently long-lived — a natural fit for workers.
7. Cost control at high volume For sustained high-throughput workloads, one always-on worker is cheaper than millions of serverless invocations.
8. Regulated or air-gapped environments When code must run on your hardware (compliance, data residency), the worker pulls jobs outbound — Ironflow never reaches in.
### When NOT to use pull mode
Stick with push mode if:
- Tasks complete within your platform’s HTTP timeout (Vercel Hobby 10s, Lambda 15min, etc.)
- Traffic is bursty or sporadic (idle workers waste money)
- Your team already deploys to Vercel / Next.js / Lambda and doesn’t want to run long-lived processes
- You want the platform to handle scaling for you
Rule of thumb: outbound-only connection + long duration + stateful or private infrastructure = worker. Otherwise, use push mode.
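The rule of thumb can be encoded as a tiny decision helper. This is purely illustrative (the trait names are mine, not SDK options); per the list above, any single trait is enough to justify a worker:

```typescript
// Hypothetical traits describing a workload. Any one of long duration,
// private infrastructure, or resident state argues for a pull-mode worker;
// otherwise push mode is the simpler default.
type WorkloadTraits = {
  longRunning: boolean;  // exceeds the platform's HTTP timeout
  privateInfra: boolean; // must reach VPC / on-prem resources
  stateful: boolean;     // warm caches, preloaded models, open consumers
};

function chooseMode(t: WorkloadTraits): "pull" | "push" {
  return t.longRunning || t.privateInfra || t.stateful ? "pull" : "push";
}

console.log(chooseMode({ longRunning: true, privateInfra: false, stateful: false }));  // pull
console.log(chooseMode({ longRunning: false, privateInfra: false, stateful: false })); // push
```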
## Embedded Worker (Pull Mode in Next.js)
For simpler deployments, embed the pull-mode worker directly inside your Next.js application using the instrumentation.ts hook — no separate process needed.
```typescript
// instrumentation.ts
export async function register() {
  if (process.env.NEXT_RUNTIME === "nodejs") {
    const { createFunction, createWorker } = await import("@ironflow/node");

    const myFunction = createFunction(
      { id: "my-function", triggers: [{ event: "my.event" }], recording: true },
      async ({ event, step }) => {
        return await step.run("process", async () => ({ done: true }));
      },
    );

    const worker = createWorker({ functions: [myFunction] });
    worker.start().catch(console.error);
  }
}
```

The worker starts in the background when Next.js boots. The dynamic import and the `NEXT_RUNTIME` guard keep the file compatible with the Edge runtime.
When to use embedded workers:
- Single-service deployments (no separate worker process)
- Prototyping and development
- Apps with moderate throughput where a dedicated worker fleet isn’t justified
See `examples/todo-web/` for a complete example.
## Triggering Events
Events trigger workflow execution. Use the SDK client to emit events from your application:
```typescript
import { createClient } from "@ironflow/node";

const client = createClient({
  serverUrl: "http://localhost:9123",
});

// Fire and forget
await client.emit("order.placed", {
  orderId: "ord_123",
  total: 99.99,
});
```

The JS SDK client exposes both `emit()` (fire-and-forget) and `emitSync(name, data, opts)`, a typed wrapper over TriggerSync that blocks until the run completes.
```go
client := ironflow.NewClient(ironflow.ClientConfig{
    ServerURL: "http://localhost:9123",
})

client.Emit(ctx, "order.placed", map[string]any{
    "orderId": "ord_123",
    "total":   99.99,
})
```

Via cURL:

```bash
curl -X POST http://localhost:9123/api/v1/events \
  -H "Content-Type: application/json" \
  -d '{
    "name": "order.placed",
    "data": { "orderId": "ord_123", "total": 99.99 }
  }'
```

## Mode Comparison
| Feature | Push Mode | Pull Mode |
|---|---|---|
| Timeout | Bounded by the smaller of the function's `timeout` config (default 10min) and the host platform's HTTP timeout (Vercel Hobby 10s, Pro 60s, Lambda 15min) | Unlimited |
| Scaling | Platform handles | You manage workers |
| Cold starts | Yes | No (persistent connection) |
| Network | Outbound from Ironflow | Outbound from worker |
| Use case | Short tasks, serverless | Long tasks, GPU workloads |
## Real-Time Event Subscriptions
Subscribe to events in real time using WebSocket-based subscriptions with NATS-style wildcard patterns.
Full documentation: See the Events & Pub/Sub guide for comprehensive SDK guides, error handling, reconnection patterns, and best practices.
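One of the patterns that guide covers, reconnection, usually boils down to exponential backoff with jitter: wait roughly twice as long after each failed attempt, capped at some maximum, with randomness so many clients don't reconnect in lockstep. A generic sketch (the base and cap values here are illustrative defaults, not Ironflow SDK settings):

```typescript
// Exponential backoff with "equal jitter": half of the exponential delay is
// fixed, the other half is random, which spreads reconnection attempts out.
function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  return exp / 2 + Math.random() * (exp / 2);
}

// Delays roughly double per attempt until they hit the cap
for (let attempt = 0; attempt < 8; attempt++) {
  console.log(attempt, Math.round(backoffDelayMs(attempt)));
}
```

A reconnect loop would sleep for `backoffDelayMs(attempt)` after each failed `connect()`, and reset `attempt` to zero once a connection succeeds.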
## Hybrid Event Model
Ironflow supports two parallel event processing layers:
| Layer | Purpose | Use When |
|---|---|---|
| Function Layer | Durable workflow execution with retries, steps, state | Processing orders, sending emails, reliable webhooks |
| Event Stream Layer | Lightweight real-time distribution | Dashboards, monitoring, analytics, debugging |
Events are published to the Event Stream immediately, regardless of whether functions match.
## Quick Start
```typescript
import { ironflow } from "@ironflow/browser";

ironflow.configure({ serverUrl: "http://localhost:9123" });
await ironflow.connect();

const sub = await ironflow.subscribe("system.run.*", {
  onEvent: (event) => console.log(event),
  onError: (error) => console.error(error.message),
  replay: 10,
});

// Cleanup
sub.unsubscribe();
ironflow.disconnect();
```

```go
subClient := ironflow.NewSubscriptionClient(ironflow.SubscriptionClientConfig{
    WSURL: "ws://localhost:9123/ws",
})

ctx := context.Background()
if err := subClient.Connect(ctx); err != nil {
    log.Fatal(err)
}

sub, _ := subClient.Subscribe(ctx, ironflow.Patterns.AllRuns(), &ironflow.SubscribeOptions{
    Replay: 5,
})

for event := range sub.Events() {
    fmt.Printf("%s: %s\n", event.Topic, string(event.Data))
}
```

Via the CLI:

```bash
./build/ironflow subscribe "system.run.>" --replay 10
```

## Pattern Syntax
| Token | Description | Example |
|---|---|---|
| `*` | Matches exactly one segment | `system.run.*.created` |
| `>` | Matches one or more segments (end of pattern only) | `system.run.>` |

Common patterns: `system.run.>` (all runs), `system.run.{id}.step.>` (steps for one run), `events:>` (user events)
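For intuition, the two tokens can be implemented in a few lines. A sketch of a matcher consistent with the table above (the server's matcher is authoritative; this version only handles dot-separated topics):

```typescript
// NATS-style pattern matching over dot-separated topics:
// "*" matches exactly one segment; ">" matches one or more trailing segments
// and is only valid as the final token of the pattern.
function matches(pattern: string, topic: string): boolean {
  const p = pattern.split(".");
  const t = topic.split(".");
  for (let i = 0; i < p.length; i++) {
    if (p[i] === ">") return i === p.length - 1 && t.length > i; // last token, eats 1+ segments
    if (i >= t.length) return false;                             // topic too short
    if (p[i] !== "*" && p[i] !== t[i]) return false;             // literal mismatch
  }
  return p.length === t.length; // no trailing unmatched segments
}

console.log(matches("system.run.*", "system.run.abc"));          // true
console.log(matches("system.run.>", "system.run.abc.step.1"));   // true
console.log(matches("system.run.*", "system.run.abc.created"));  // false ("*" is one segment)
```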
## What's Next?
- Error Handling — Handle errors with `NonRetryableError`
- Debugging — Hot patching, TUI debugger, VS Code DAP