Self Host
This guide walks through running Ironflow in production using Docker Compose with PostgreSQL. It covers the single-node setup; for multi-node Docker Compose with PostgreSQL and NATS clustering, see Docker Compose Deployment.
Kubernetes alternative: For Kubernetes deployments, see the Small template in the Deployment Overview — it provides the same single-replica experience with Kubernetes management.
Quick Start
Get Ironflow running with PostgreSQL in about five minutes.
1. Create a .env file:
```env
POSTGRES_USER=ironflow
POSTGRES_PASSWORD=change-me
POSTGRES_DB=ironflow
IRONFLOW_DATABASE_URL=postgres://ironflow:change-me@postgres:5432/ironflow?sslmode=disable
```

2. Start Ironflow and PostgreSQL:

```bash
docker compose --profile postgres up -d
```

This uses the postgres profile in the included docker-compose.yml, which starts both the ironflow and postgres services.
3. Verify Ironflow is running:
```bash
curl http://localhost:9123/health
```

You should see `{"status":"healthy","timestamp":"...","version":"..."}`. The dashboard is available at http://localhost:9123.
Comprehensive Guide
Prerequisites
- Docker 24+
- Docker Compose v2 (`docker compose`, not `docker-compose`)
Environment Configuration
Create a .env file in the same directory as your docker-compose.yml. The configuration varies by environment:
```env
# .env — local development
POSTGRES_USER=ironflow
POSTGRES_PASSWORD=ironflow
POSTGRES_DB=ironflow
IRONFLOW_DATABASE_URL=postgres://ironflow:ironflow@postgres:5432/ironflow?sslmode=disable

NATS_STORE_DIR=/data/nats

# IRONFLOW_PORT is a Docker Compose variable for host port mapping only.
# Ironflow itself always listens on port 9123 (use --port to change).
LOG_LEVEL=debug
```

No master key needed — secrets are stored unencrypted for convenience.
```env
# .env — staging / test
POSTGRES_USER=ironflow
POSTGRES_PASSWORD=<generate-a-random-password>
POSTGRES_DB=ironflow
IRONFLOW_DATABASE_URL=postgres://ironflow:<password>@postgres:5432/ironflow?sslmode=disable

NATS_STORE_DIR=/data/nats

# IRONFLOW_PORT is a Docker Compose variable for host port mapping only.
# Ironflow itself always listens on port 9123 (use --port to change).
LOG_LEVEL=info

# Encrypt secrets at rest
IRONFLOW_MASTER_KEY=<openssl rand -hex 32>

# Enable metrics for observability
IRONFLOW_METRICS_ENABLED=true
```

```env
# .env — production
POSTGRES_USER=ironflow
POSTGRES_PASSWORD=<strong-random-password>
POSTGRES_DB=ironflow
IRONFLOW_DATABASE_URL=postgres://ironflow:<password>@postgres:5432/ironflow?sslmode=disable

NATS_STORE_DIR=/data/nats

# IRONFLOW_PORT is a Docker Compose variable for host port mapping only.
# Ironflow itself always listens on port 9123 (use --port to change).
LOG_LEVEL=warn

# Required: encrypt secrets at rest
IRONFLOW_MASTER_KEY=<openssl rand -hex 32>

# Observability
IRONFLOW_METRICS_ENABLED=true
IRONFLOW_OTEL_ENDPOINT=otel-collector:4317
IRONFLOW_OTEL_SERVICE_NAME=ironflow
```

For production, consider using a managed PostgreSQL service instead of the bundled container. See Using an External PostgreSQL below.
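The placeholder passwords and keys above can be generated with standard openssl commands. A quick sketch (the tr pipeline just makes the password URL-safe so it embeds cleanly in the connection string):

```shell
# 32-byte hex value for IRONFLOW_MASTER_KEY
openssl rand -hex 32

# Random password with only URL-safe characters, suitable for
# POSTGRES_PASSWORD and the IRONFLOW_DATABASE_URL connection string
openssl rand -base64 24 | tr '+/' '-_' | tr -d '='
```

Paste each value into the corresponding variable in your .env; avoid reusing the same value for the password and the master key.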
See the Configuration reference for a full list of variables and their defaults.
Docker Compose Walkthrough
The docker-compose.yml in the Ironflow repository defines the orchestration. Note the use of Profiles to make PostgreSQL optional.
```yaml
services:
  ironflow:
    image: ghcr.io/sahina/ironflow:${VERSION:-latest}
    ports:
      - "${IRONFLOW_PORT:-9123}:9123"
    environment:
      - LOG_LEVEL=${LOG_LEVEL:-info}
      - IRONFLOW_DATABASE_URL=${IRONFLOW_DATABASE_URL:-}
      - NATS_STORE_DIR=${NATS_STORE_DIR:-/data/nats}
      - IRONFLOW_MASTER_KEY=${IRONFLOW_MASTER_KEY:-}
      - IRONFLOW_ENV=${IRONFLOW_ENV:-default}
      - IRONFLOW_METRICS_ENABLED=${IRONFLOW_METRICS_ENABLED:-true}
      - IRONFLOW_OTEL_ENDPOINT=${IRONFLOW_OTEL_ENDPOINT:-}
      - IRONFLOW_OTEL_SAMPLE_RATE=${IRONFLOW_OTEL_SAMPLE_RATE:-1.0}
      - IRONFLOW_OTEL_SERVICE_NAME=${IRONFLOW_OTEL_SERVICE_NAME:-ironflow}
    volumes:
      - ironflow-data:/data # Persists NATS JetStream and SQLite data
    depends_on:
      postgres:
        condition: service_healthy
        required: false
    healthcheck:
      test: ["CMD", "wget", "-q", "--spider", "http://localhost:9123/health"]
      interval: 10s
      timeout: 5s
      retries: 3

  postgres:
    image: postgres:16-alpine
    profiles:
      - postgres # Only starts when --profile postgres is passed
    environment:
      POSTGRES_USER: ${POSTGRES_USER:-ironflow}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-ironflow}
      POSTGRES_DB: ${POSTGRES_DB:-ironflow}
    ports:
      - "127.0.0.1:5432:5432" # Bound to localhost only
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "pg_isready -U ${POSTGRES_USER:-ironflow} -d ${POSTGRES_DB:-ironflow}",
        ]
      interval: 5s
      timeout: 5s
      retries: 5

  prometheus:
    image: prom/prometheus:v3.2.1
    profiles:
      - monitoring # Only starts when --profile monitoring is passed
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml:ro
    depends_on:
      ironflow:
        condition: service_healthy

volumes:
  ironflow-data:
  postgres-data:
```

Volume Permissions
Ironflow runs as a non-root user (appuser, UID 100 from Alpine’s adduser -S) for security. If you are using bind mounts instead of named Docker volumes, ensure the host directory is writable:
```bash
mkdir -p ./data
chown -R 100:101 ./data
```

Starting and Stopping
```bash
# Start everything (Ironflow + PostgreSQL)
docker compose --profile postgres up -d

# View logs
docker compose logs -f ironflow

# Stop all services (data is preserved)
docker compose --profile postgres down

# Stop and delete all data (full reset)
docker compose --profile postgres down -v
```

Connecting Your SDK
Set IRONFLOW_SERVER_URL and IRONFLOW_API_KEY in your application environment:
```typescript
// Server-side (Node.js)
import { createClient } from "@ironflow/node";

const client = createClient({
  serverUrl: process.env.IRONFLOW_SERVER_URL || "http://localhost:9123",
  apiKey: process.env.IRONFLOW_API_KEY,
});
```

```typescript
// Browser-side
import { ironflow } from "@ironflow/browser";

ironflow.configure({
  serverUrl: process.env.NEXT_PUBLIC_IRONFLOW_SERVER_URL || "http://localhost:9123",
  auth: {
    apiKey: process.env.NEXT_PUBLIC_IRONFLOW_API_KEY,
  },
});
```

```go
import "github.com/sahina/ironflow/sdk/go/ironflow"

client := ironflow.NewClient(ironflow.ClientConfig{
    ServerURL: os.Getenv("IRONFLOW_SERVER_URL"),
    APIKey:    os.Getenv("IRONFLOW_API_KEY"),
})
```

Initial Setup & Auth
Authentication is always on. On first boot, Ironflow auto-bootstraps your environment.
1. Retrieve Admin Credentials:
Check the server logs immediately after the first start to find your generated admin password and initial API key:
```
# Look for the "Bootstrap" log lines
# Example:
# ✓ Admin API Key: ifkey_a1b2c3d4e5f6...
# ✓ Admin Password: xxxxxxxxxxxx
```

2. Access the Dashboard:
Navigate to http://localhost:9123 and log in with:
- Email: admin@ironflow.local
- Password: (From the logs in step 1)
3. Create a New API Key:
The bootstrap process creates a default environment (env_default). Use the CLI or Dashboard to create a new, secure API key for your SDKs.
```bash
docker compose exec ironflow ironflow apikey create my-key
```

NATS Persistence
By default, the docker-compose.yml sets NATS_STORE_DIR=/data/nats and maps it to the ironflow-data volume. This ensures your recorded history — every step captured, replayable, rewindable — survives container restarts.
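If you bind-mount the store (see Volume Permissions above) rather than using the named volume, you can snapshot it with plain tar. A minimal sketch, where the paths are placeholders for your own layout; stop the ironflow service first so the stream files are consistent:

```shell
# Paths are placeholders: point STORE_DIR at your bind-mounted NATS store
STORE_DIR=./data/nats
BACKUP_DIR=./backups
mkdir -p "$STORE_DIR" "$BACKUP_DIR"

# Timestamped tarball of the JetStream store
tar czf "$BACKUP_DIR/nats-store-$(date +%Y%m%d-%H%M%S).tar.gz" -C "$STORE_DIR" .
```

For the default named volume, the same tar approach works from a throwaway container that mounts ironflow-data.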
Monitoring with Prometheus
To enable Prometheus metrics scraping, set IRONFLOW_METRICS_ENABLED=true in your .env and start the monitoring profile:
```bash
docker compose --profile postgres --profile monitoring up -d
```

Prometheus will be available at http://localhost:9090 and will automatically scrape Ironflow’s /metrics endpoint every 15 seconds. See the Observability reference for available metrics and Grafana dashboard queries.
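The compose file mounts ./prometheus.yml from the working directory. If your checkout does not include one, here is a minimal sketch matching the 15-second scrape interval described above; the job name is an assumption, and the target uses the Compose service name plus Ironflow's fixed port:

```yaml
# prometheus.yml — minimal sketch (job name is illustrative)
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: ironflow
    metrics_path: /metrics
    static_configs:
      - targets: ["ironflow:9123"] # Compose service name, not localhost
```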
Upgrading
Pull the latest image and restart. Ironflow applies database migrations automatically on startup.
```bash
docker compose pull ironflow
docker compose --profile postgres up -d
```

Using an External PostgreSQL
If you’re using a managed service (Supabase, Neon, etc.), skip the --profile postgres flag and set IRONFLOW_DATABASE_URL in your .env:
```env
# In .env
IRONFLOW_DATABASE_URL=postgres://user:pass@db.example.com:5432/ironflow?sslmode=require
```

Then start only the Ironflow service:
```bash
docker compose up -d
```

TLS for Cloud Databases
Cloud PostgreSQL providers usually require sslmode=require or sslmode=verify-full.
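Assuming Ironflow's database driver accepts libpq-style connection parameters, a full-verification connection string might look like the following; the CA bundle path is an assumption and depends on where you mount your provider's certificate into the container:

```env
# In .env — verify the server certificate against a CA bundle (path is illustrative)
IRONFLOW_DATABASE_URL=postgres://user:pass@db.example.com:5432/ironflow?sslmode=verify-full&sslrootcert=/etc/ssl/certs/provider-ca.pem
```

With sslmode=require the connection is encrypted but the server certificate is not verified; verify-full additionally checks the certificate chain and hostname.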