
YAML Configuration

Ironflow can be configured with a declarative YAML file instead of CLI flags and environment variables. One file describes your entire server infrastructure — from a local dev setup to a multi-tenant platform.

Without YAML, configuring Ironflow means combining CLI flags, environment variables, and docker-compose files. The configuration is scattered and hard to reproduce. With ironflow.yaml, your entire infrastructure is a single, version-controlled file.

| Without YAML | With YAML |
| --- | --- |
| `--port 9123 --nats-url nats://h1:4222 --node-id node-1` + `IRONFLOW_DATABASE_URL=...` `IRONFLOW_MASTER_KEY=...` | `ironflow serve -f ironflow.yaml` |
| Configuration scattered across flags, env vars, compose files | Single file, checked into git |
| Easy to forget a flag when deploying | Reproducible on every boot |

Use ironflow config init to generate a starter YAML for your deployment tier:

```shell
# Local development (minimal, SQLite, dev mode)
ironflow config init > ironflow.yaml

# Production single-node (PostgreSQL, observability)
ironflow config init --prod > ironflow.yaml

# Multi-node cluster
ironflow config init --kind cluster > ironflow.yaml

# Multi-tenant platform
ironflow config init --kind platform > ironflow.yaml
```

Every ironflow.yaml has a kind that determines which features and validations apply:

| Kind | Use Case | Requirements |
| --- | --- | --- |
| `Server` | Local dev, single-node production | None (SQLite + embedded NATS by default) |
| `Cluster` | Multi-node horizontal scaling | PostgreSQL + external NATS + stable `nodeId` |
| `Platform` | Multi-tenant SaaS | Everything in Cluster + organizations |

The kind you choose determines what fields are required. Start with Server and graduate to Cluster or Platform as your deployment grows.


The simplest configuration — six lines to start:

```yaml
apiVersion: ironflow/v1
kind: Server
spec:
  port: 9123
  auth:
    devMode: true
```

```shell
ironflow serve -f ironflow.yaml
```

This boots Ironflow with embedded NATS (memory mode), SQLite, and authentication disabled. Similar to ironflow serve --dev, though the --dev flag also tightens crash-resume timings for faster local iteration.


Add PostgreSQL, secrets encryption, and observability:

```yaml
apiVersion: ironflow/v1
kind: Server
spec:
  port: 9123
  storage:
    driver: postgres
    url: ${IRONFLOW_DATABASE_URL}
    pool:
      maxConns: 25
      minConns: 5
      maxIdleTime: 30s
  nats:
    storeDir: /data/nats
  auth:
    masterKey: ${IRONFLOW_MASTER_KEY}
    jwtSecret: ${IRONFLOW_JWT_SECRET} # optional — auto-generated if empty
  observability:
    tracing:
      endpoint: otel-collector:4317
      sampleRate: 1.0
    metrics:
      enabled: true
```

Notice the ${IRONFLOW_DATABASE_URL} syntax — secrets are never stored in the file. See Environment Variable References below.


Scale horizontally with shared PostgreSQL and external NATS. Deploy the same file on every node — only nodeId differs (set via env var per node):

```yaml
apiVersion: ironflow/v1
kind: Cluster
spec:
  port: 9123
  storage:
    driver: postgres
    url: ${IRONFLOW_DATABASE_URL}
    pool:
      maxConns: 25
      minConns: 5
  nats:
    url: ${NATS_URL}
    # credentials: /path/to/nats.creds
  cluster:
    nodeId: ${IRONFLOW_NODE_ID}
    staleClaimThreshold: 2m
  auth:
    masterKey: ${IRONFLOW_MASTER_KEY}
    jwtSecret: ${IRONFLOW_JWT_SECRET} # optional — auto-generated if empty
  observability:
    tracing:
      endpoint: otel-collector:4317
      sampleRate: 1.0
    metrics:
      enabled: true
```

The Cluster kind enforces three rules at startup:

  1. PostgreSQL required — SQLite does not support SKIP LOCKED for distributed scheduling
  2. External NATS required — embedded NATS is single-process only
  3. Stable nodeId required — used for claim ownership and log correlation

See Docker Compose Deployment for the full clustering guide.


Declare organizations, projects, and environments. Ironflow creates them on first boot; subsequent boots are idempotent:

```yaml
apiVersion: ironflow/v1
kind: Platform
spec:
  port: 9123
  storage:
    driver: postgres
    url: ${IRONFLOW_DATABASE_URL}
  nats:
    url: ${NATS_URL}
  cluster:
    nodeId: ${IRONFLOW_NODE_ID}
  auth:
    masterKey: ${IRONFLOW_MASTER_KEY}
    jwtSecret: ${IRONFLOW_JWT_SECRET} # optional — auto-generated if empty
  platform:
    organizations:
      - name: acme-corp
        projects:
          - name: payments
            environments:
              - name: development
              - name: staging
              - name: production
          - name: orders
            environments:
              - name: development
              - name: production
      - name: globex
        projects:
          - name: logistics
            environments:
              - name: production
```

Platform inherits all Cluster requirements and adds:

  • At least one organization — the platform must have tenants
  • Unique names — duplicate org, project, or environment names within a parent are rejected

The default organization (org_default) and its project/environment are always created first, regardless of what’s in the YAML. Your Platform organizations are created on top.
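Idempotency here can be pictured as an upsert keyed by name within each parent: anything missing is created, anything present is left alone, and nothing is deleted. A rough Python model of that behavior (illustrative only; the real bootstrap runs against the database):

```python
import copy

def bootstrap(existing, desired):
    """Create any org/project/environment from `desired` that is not
    already in `existing`; never duplicate, never delete.

    Both arguments map org name -> {project name -> set of env names}.
    Mutates and returns `existing`, so a second run is a no-op."""
    for org, projects in desired.items():
        org_state = existing.setdefault(org, {})
        for project, envs in projects.items():
            org_state.setdefault(project, set()).update(envs)
    return existing

state = {"org_default": {"default": {"default"}}}  # always created first
desired = {"acme-corp": {"payments": {"development", "production"}}}
once = bootstrap(state, desired)
snapshot = copy.deepcopy(once)
twice = bootstrap(once, desired)  # simulated second boot
print(twice == snapshot)  # True
```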


Environment Variable References

The YAML file is safe to commit to git because secrets use ${VAR} references that resolve at boot time:

| Syntax | Behavior |
| --- | --- |
| `${VAR}` | Required — startup fails with a clear error if not set |
| `${VAR:-default}` | Optional — uses the default value if the env var is not set |
| `postgres://literal` | Literal string — used as-is (no `${}` means no resolution) |
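The three behaviors can be sketched as a small resolver. This is a model of the semantics, not Ironflow's code; the regex and function names are illustrative:

```python
import os
import re

_PATTERN = re.compile(r"\$\{([A-Z0-9_]+)(?::-([^}]*))?\}")

def resolve(value, env=os.environ):
    """Expand ${VAR} and ${VAR:-default}; leave literals untouched."""
    def repl(match):
        var, default = match.group(1), match.group(2)
        if var in env:
            return env[var]
        if default is not None:
            return default
        raise ValueError(f"environment variable ${{{var}}} is not set")
    return _PATTERN.sub(repl, value)

env = {"IRONFLOW_DATABASE_URL": "postgres://db/iron"}
print(resolve("${IRONFLOW_DATABASE_URL}", env))  # postgres://db/iron
print(resolve("${NATS_MAX_MEM:-256MB}", env))    # 256MB
print(resolve("postgres://literal", env))        # postgres://literal
```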

Example:

```yaml
storage:
  url: ${IRONFLOW_DATABASE_URL} # required — error if unset
nats:
  maxMemory: ${NATS_MAX_MEM:-256MB} # optional — defaults to 256MB
auth:
  masterKey: ${IRONFLOW_MASTER_KEY} # required — error if unset
```

If an env var is not set and has no default, the error message tells you exactly which field and variable:

```
config file error: environment variable ${IRONFLOW_DATABASE_URL} is not set
```

Check your YAML for errors without starting the server:

```shell
ironflow validate -f ironflow.yaml
```

This parses the file, resolves ${VAR} references, runs kind-specific validation, and prints a summary:

```
Validating ironflow.yaml...
Kind: Cluster
Storage: postgres (postgres://localhost/iron...)
NATS: external (nats://h1:4222,h2:4222)
Auth: enabled (master key set)
Tracing: otel-collector:4317
Metrics: enabled (/metrics)
Node ID: node-1
✓ Valid. Ready to boot with: ironflow serve -f ironflow.yaml
```

Exit code 0 means valid, exit code 1 means invalid.
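The kind-specific checks can be modeled roughly as follows. This sketch is illustrative and covers only the Cluster and Platform rules described above, not the full validator:

```python
def validate(config):
    """Return a list of kind-specific errors for a parsed config dict."""
    errors = []
    kind = config.get("kind")
    spec = config.get("spec", {})
    if kind in ("Cluster", "Platform"):
        if spec.get("storage", {}).get("driver") != "postgres":
            errors.append("Cluster requires PostgreSQL storage")
        if not spec.get("nats", {}).get("url"):
            errors.append("Cluster requires an external NATS url")
        if not spec.get("cluster", {}).get("nodeId"):
            errors.append("Cluster requires a stable nodeId")
    if kind == "Platform" and not spec.get("platform", {}).get("organizations"):
        errors.append("Platform requires at least one organization")
    return errors

bad = {"kind": "Cluster", "spec": {"storage": {"driver": "sqlite"}}}
print(validate(bad))  # lists the three missing Cluster requirements
# exit code convention: 0 when the list is empty, 1 otherwise
```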


CLI flags always take precedence over YAML values. This lets you use a shared YAML file but override specific settings per-node or per-environment:

```shell
# YAML says port 9123, but override to 8080 for this node
ironflow serve -f ironflow.yaml --port 8080

# YAML says dev mode is off, but enable it locally
ironflow serve -f ironflow.yaml --dev
```

Only flags you explicitly pass override the YAML. Unset flags preserve the YAML values.
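That merge rule can be modeled as: start from the YAML values, then overlay only the flags the user actually typed (an illustrative sketch, not the real flag parser):

```python
def effective_config(yaml_values, passed_flags):
    """Flags explicitly passed on the CLI win; everything else keeps
    its YAML value. `passed_flags` holds only flags the user typed,
    so unset flags never clobber the YAML."""
    merged = dict(yaml_values)
    merged.update(passed_flags)
    return merged

yaml_values = {"port": 9123, "devMode": False}
print(effective_config(yaml_values, {"port": 8080}))
# {'port': 8080, 'devMode': False}
```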


If you don’t use -f, nothing changes. All existing flags and environment variables work exactly as before:

```shell
# These still work — no YAML needed
ironflow serve
ironflow serve --port 9000
ironflow serve --nats-url nats://h1:4222 --node-id node-1
IRONFLOW_DATABASE_URL="postgres://..." ironflow serve
```

The -f flag is opt-in. You can adopt YAML incrementally.


| Field | Type | Default | Supports `${VAR}` | Notes |
| --- | --- | --- | --- | --- |
| `apiVersion` | string | — | No | Must be `ironflow/v1` |
| `kind` | string | — | No | `Server`, `Cluster`, or `Platform` |
| `spec.port` | int | `9123` | No | HTTP server port |
| `spec.storage.driver` | string | `sqlite` | No | `sqlite` or `postgres` |
| `spec.storage.url` | string | — | Yes | PostgreSQL connection string |
| `spec.storage.path` | string | `ironflow.db` | No | SQLite file path |
| `spec.storage.pool.maxConns` | int | `25` | No | Max PostgreSQL connections |
| `spec.storage.pool.minConns` | int | `5` | No | Min PostgreSQL connections |
| `spec.storage.pool.maxIdleTime` | duration | `30s` | No | Idle connection timeout |
| `spec.nats.embedded` | bool | `true` | No | Use embedded NATS (auto-set to `false` when `url` is set) |
| `spec.nats.url` | string | — | Yes | External NATS URL |
| `spec.nats.credentials` | string | — | Yes | Path to `.creds` file |
| `spec.nats.storeDir` | string | — | No | JetStream storage directory |
| `spec.nats.maxMemory` | string | `256MB` | No | JetStream memory limit |
| `spec.nats.fileStorage` | bool | `false` | No | Create file-backed JetStream streams |
| `spec.nats.streamReplicas` | int | `1` | No | JetStream stream replica count |
| `spec.auth.devMode` | bool | `false` | No | Bypass all authentication |
| `spec.auth.masterKey` | string | — | Yes | AES-256 key for secrets |
| `spec.auth.jwtSecret` | string | — | Yes | JWT signing secret — auto-generated if empty |
| `spec.observability.tracing.endpoint` | string | — | No | OTLP gRPC endpoint |
| `spec.observability.tracing.sampleRate` | float | `1.0` | No | Trace sampling rate (0.0–1.0) |
| `spec.observability.tracing.serviceName` | string | `ironflow` | No | OTel service name |
| `spec.observability.tracing.insecure` | bool | `true` | No | Plaintext gRPC for OTLP |
| `spec.observability.metrics.enabled` | bool | `false` | No | Enable Prometheus `/metrics` |
| `spec.engine.pushTimeout` | string | `10s` | No | Push mode HTTP timeout |
| `spec.engine.schedulerInterval` | string | `1s` | No | Scheduler poll interval |
| `spec.engine.staleRunningTimeout` | string | `5m` | No | Stale running run timeout |
| `spec.engine.retry.maxAttempts` | int | `3` | No | Default retry attempts |
| `spec.engine.retry.initialDelay` | string | `1s` | No | First retry delay |
| `spec.engine.retry.maxDelay` | string | `5m` | No | Maximum retry delay |
| `spec.engine.retry.backoff` | float | `2.0` | No | Backoff multiplier |
| `spec.cluster.nodeId` | string | — | Yes | Stable node identifier |
| `spec.cluster.staleClaimThreshold` | string | `2m` | No | Dead node claim recovery |
| `spec.cluster.staleClaimRecoveryInterval` | string | `60s` | No | How often scheduler scans for orphaned claims |
| `spec.platform.organizations[].name` | string | — | No | Organization name |
| `spec.platform.organizations[].projects[].name` | string | — | No | Project name |
| `spec.platform.organizations[].projects[].environments[].name` | string | — | No | Environment name |
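As a worked example of the spec.engine.retry fields, standard exponential backoff with the defaults above (initialDelay 1s, backoff 2.0, maxDelay 5m) produces the delays below. This is a sketch of the usual formula; the actual engine may add jitter or differ in detail:

```python
def retry_delay(attempt, initial=1.0, backoff=2.0, max_delay=300.0):
    """Seconds to wait before retry `attempt` (1-based):
    initialDelay * backoff^(attempt-1), capped at maxDelay."""
    return min(initial * backoff ** (attempt - 1), max_delay)

print([retry_delay(n) for n in range(1, 6)])  # [1.0, 2.0, 4.0, 8.0, 16.0]
print(retry_delay(10))  # 300.0 — capped at maxDelay (5m)
```

With the default maxAttempts of 3, only the first two delays are ever used; the cap matters when you raise maxAttempts.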