
Self Host

This guide walks through running Ironflow in production using Docker Compose with PostgreSQL on a single node. For multi-node Docker Compose with PostgreSQL and NATS clustering, see Docker Compose Deployment.

Kubernetes alternative: For Kubernetes deployments, see the Small template in the Deployment Overview, which provides the same single-replica experience with Kubernetes management.


Quick Start

Get Ironflow running with PostgreSQL in about five minutes.

1. Create a .env file:

Terminal window
POSTGRES_USER=ironflow
POSTGRES_PASSWORD=change-me
POSTGRES_DB=ironflow
IRONFLOW_DATABASE_URL=postgres://ironflow:change-me@postgres:5432/ironflow?sslmode=disable

2. Start Ironflow and PostgreSQL:

Terminal window
docker compose --profile postgres up -d

This uses the postgres profile in the included docker-compose.yml, which starts both the ironflow and postgres services.
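If you want to confirm what a profile will start before bringing anything up, Compose can print the resolved service list (run this next to the docker-compose.yml; requires Docker Compose v2):

```shell
# Services started with the postgres profile (the compose file below
# gates postgres behind this profile, and prometheus behind "monitoring")
docker compose --profile postgres config --services

# Without the profile, only the ironflow service is included
docker compose config --services
```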

3. Verify Ironflow is running:

Terminal window
curl http://localhost:9123/health

You should see {"status":"healthy","timestamp":"...","version":"..."}. The dashboard is available at http://localhost:9123.
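In scripts (CI jobs, entrypoints) it helps to wait for the health endpoint rather than assume instant readiness. A minimal sketch, assuming the default port and a curl binary; the function name and timeout are illustrative:

```shell
#!/bin/sh
# Poll a health endpoint until it responds, or give up after N one-second attempts.
wait_for_health() {
  url="${1:-http://localhost:9123/health}"
  attempts="${2:-30}"
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}
```

For example: `wait_for_health http://localhost:9123/health 60 || docker compose logs ironflow`.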


Comprehensive Guide

Prerequisites

  • Docker Engine with the Compose v2 plugin (the docker compose command)
  • The docker-compose.yml from the Ironflow repository, in your working directory

Environment Configuration

Create a .env file in the same directory as your docker-compose.yml. The configuration varies by environment:

Terminal window
# .env — local development
POSTGRES_USER=ironflow
POSTGRES_PASSWORD=ironflow
POSTGRES_DB=ironflow
IRONFLOW_DATABASE_URL=postgres://ironflow:ironflow@postgres:5432/ironflow?sslmode=disable
NATS_STORE_DIR=/data/nats
# IRONFLOW_PORT is a Docker Compose variable for host port mapping only.
# Ironflow itself always listens on port 9123 (use --port to change).
LOG_LEVEL=debug

No master key needed — secrets are stored unencrypted for convenience.

See the Configuration reference for a full list of variables and their defaults.
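For production, the main differences are a strong database password, a master key so stored secrets are encrypted, and a quieter log level. A sketch (all values are placeholders; the hex key format is an assumption, so check the Configuration reference for the exact requirements):

```shell
# .env (production)
POSTGRES_USER=ironflow
POSTGRES_PASSWORD=change-me-to-a-long-random-value
POSTGRES_DB=ironflow
IRONFLOW_DATABASE_URL=postgres://ironflow:change-me-to-a-long-random-value@postgres:5432/ironflow?sslmode=disable
NATS_STORE_DIR=/data/nats
# Encrypts stored secrets; generate once with: openssl rand -hex 32
# Keep it safe: losing the key makes stored secrets unreadable.
IRONFLOW_MASTER_KEY=paste-generated-key-here
LOG_LEVEL=info
```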

Docker Compose Walkthrough

The docker-compose.yml in the Ironflow repository defines the orchestration. Note the use of Compose profiles, which make the PostgreSQL and Prometheus services optional.

services:
  ironflow:
    image: ghcr.io/sahina/ironflow:${VERSION:-latest}
    ports:
      - "${IRONFLOW_PORT:-9123}:9123"
    environment:
      - LOG_LEVEL=${LOG_LEVEL:-info}
      - IRONFLOW_DATABASE_URL=${IRONFLOW_DATABASE_URL:-}
      - NATS_STORE_DIR=${NATS_STORE_DIR:-/data/nats}
      - IRONFLOW_MASTER_KEY=${IRONFLOW_MASTER_KEY:-}
      - IRONFLOW_ENV=${IRONFLOW_ENV:-default}
      - IRONFLOW_METRICS_ENABLED=${IRONFLOW_METRICS_ENABLED:-true}
      - IRONFLOW_OTEL_ENDPOINT=${IRONFLOW_OTEL_ENDPOINT:-}
      - IRONFLOW_OTEL_SAMPLE_RATE=${IRONFLOW_OTEL_SAMPLE_RATE:-1.0}
      - IRONFLOW_OTEL_SERVICE_NAME=${IRONFLOW_OTEL_SERVICE_NAME:-ironflow}
    volumes:
      - ironflow-data:/data # Persists NATS JetStream and SQLite data
    depends_on:
      postgres:
        condition: service_healthy
        required: false
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:9123/health"]
      interval: 10s
      timeout: 5s
      retries: 3

  postgres:
    image: postgres:16-alpine
    profiles:
      - postgres # Only starts when --profile postgres is passed
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-ironflow}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-ironflow}
      POSTGRES_DB: ${POSTGRES_DB:-ironflow}
    ports:
      - "127.0.0.1:5432:5432" # Bound to localhost only
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "pg_isready -U ${POSTGRES_USER:-ironflow} -d ${POSTGRES_DB:-ironflow}",
        ]
      interval: 5s
      timeout: 5s
      retries: 5

  prometheus:
    image: prom/prometheus:v3.2.1
    profiles:
      - monitoring # Only starts when --profile monitoring is passed
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    depends_on:
      ironflow:
        condition: service_healthy

volumes:
  ironflow-data:
  postgres-data:

Volume Permissions

Ironflow runs as a non-root user (appuser, UID 100 and GID 101 from Alpine’s adduser -S) for security. If you are using bind mounts instead of named Docker volumes, ensure the host directory mounted at /data is writable by that user:

Terminal window
mkdir -p ./data
chown -R 100:101 ./data

Starting and Stopping

Terminal window
# Start everything (Ironflow + PostgreSQL)
docker compose --profile postgres up -d
# View logs
docker compose logs -f ironflow
# Stop all services (data is preserved)
docker compose --profile postgres down
# Stop and delete all data (full reset)
docker compose --profile postgres down -v

Connecting Your SDK

Set IRONFLOW_SERVER_URL and IRONFLOW_API_KEY in your application environment:

// Server-side (Node.js)
import { createClient } from "@ironflow/node";

const client = createClient({
  serverUrl: process.env.IRONFLOW_SERVER_URL || "http://localhost:9123",
  apiKey: process.env.IRONFLOW_API_KEY,
});

// Browser-side
import { ironflow } from "@ironflow/browser";

ironflow.configure({
  serverUrl: process.env.NEXT_PUBLIC_IRONFLOW_SERVER_URL || "http://localhost:9123",
  auth: {
    apiKey: process.env.NEXT_PUBLIC_IRONFLOW_API_KEY,
  },
});

Initial Setup & Auth

Authentication is always on. On first boot, Ironflow auto-bootstraps your environment.

1. Retrieve Admin Credentials:

Check the server logs immediately after the first start to find your generated admin password and initial API key:

Terminal window
# Look for the "Bootstrap" log lines
# Example:
# ✓ Admin API Key: ifkey_a1b2c3d4e5f6...
# ✓ Admin Password: xxxxxxxxxxxx

2. Access the Dashboard:

Navigate to http://localhost:9123 and log in with:

  • Email: admin@ironflow.local
  • Password: (From the logs in step 1)

3. Create a New API Key:

The bootstrap process creates a default environment (env_default). Use the CLI or Dashboard to create a new, secure API key for your SDKs.

Terminal window
docker compose exec ironflow ironflow apikey create my-key

NATS Persistence

By default, the docker-compose.yml sets NATS_STORE_DIR=/data/nats and maps it to the ironflow-data volume. This ensures your recorded history — every step captured, replayable, rewindable — survives container restarts.

Monitoring with Prometheus

To enable Prometheus metrics scraping, set IRONFLOW_METRICS_ENABLED=true in your .env and start the monitoring profile:

Terminal window
docker compose --profile postgres --profile monitoring up -d

Prometheus will be available at http://localhost:9090 and will automatically scrape Ironflow’s /metrics endpoint every 15 seconds. See the Observability reference for available metrics and Grafana dashboard queries.
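The compose file mounts ./prometheus.yml from the repository. If you need to write your own, a minimal sketch matching the 15-second scrape described above (the job name is arbitrary; ironflow:9123 is the service’s address on the Compose network):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: "ironflow"
    metrics_path: "/metrics"
    static_configs:
      - targets: ["ironflow:9123"]
```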

Upgrading

Pull the latest image and restart. Ironflow applies database migrations automatically on startup.

Terminal window
docker compose pull ironflow
docker compose --profile postgres up -d
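Because migrations are applied automatically, it is prudent to snapshot the database before pulling a new image. A sketch using the bundled postgres service (adjust user and database names if you changed them in .env):

```shell
# Dump the database to a dated file on the host before upgrading.
# -T disables TTY allocation so the shell redirect captures the dump.
docker compose exec -T postgres pg_dump -U ironflow ironflow > "ironflow-$(date +%F).sql"
```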

Using an External PostgreSQL

If you’re using a managed service (Supabase, Neon, etc.), skip the --profile postgres flag and set IRONFLOW_DATABASE_URL in your .env:

Terminal window
# In .env
IRONFLOW_DATABASE_URL=postgres://user:pass@db.example.com:5432/ironflow?sslmode=require

Then start only the Ironflow service:

Terminal window
docker compose up -d
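If the database password contains URL-special characters (@, :, /, #), they must be percent-encoded inside the connection string. A small helper sketch, assuming python3 is available on the host; the password and hostname are illustrative:

```shell
# Percent-encode a password for use inside a postgres:// URL
encode() {
  python3 -c 'import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=""))' "$1"
}

PASS="$(encode 'p@ss:word/1')"
echo "postgres://ironflow:${PASS}@db.example.com:5432/ironflow?sslmode=require"
# Prints: postgres://ironflow:p%40ss%3Aword%2F1@db.example.com:5432/ironflow?sslmode=require
```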

TLS for Cloud Databases

Cloud PostgreSQL providers usually require sslmode=require or sslmode=verify-full.
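With verify-full, libpq-style connection strings also need the provider’s CA certificate, referenced via the sslrootcert parameter. A sketch (host, credentials, and path are placeholders; whether Ironflow’s driver honors sslrootcert is an assumption, so check the Configuration reference and your provider’s docs):

```shell
# In .env
# require: encrypts the connection but does not verify the server certificate
IRONFLOW_DATABASE_URL=postgres://user:pass@db.example.com:5432/ironflow?sslmode=require

# verify-full: also verifies the certificate chain and hostname against a CA bundle
IRONFLOW_DATABASE_URL=postgres://user:pass@db.example.com:5432/ironflow?sslmode=verify-full&sslrootcert=/etc/ssl/certs/provider-ca.pem
```

Note that the certificate path must be visible inside the ironflow container, for example via a bind mount.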