# YAML Configuration
Ironflow can be configured with a declarative YAML file instead of CLI flags and environment variables. One file describes your entire server infrastructure — from a local dev setup to a multi-tenant platform.
## Why YAML?

Without YAML, configuring Ironflow means combining CLI flags, environment variables, and docker-compose files. The configuration is scattered and hard to reproduce. With `ironflow.yaml`, your entire infrastructure is a single, version-controlled file.
| Without YAML | With YAML |
|---|---|
| `--port 9123 --nats-url nats://h1:4222 --node-id node-1` plus `IRONFLOW_DATABASE_URL=...` `IRONFLOW_MASTER_KEY=...` | `ironflow serve -f ironflow.yaml` |
| Configuration scattered across flags, env vars, compose files | Single file, checked into git |
| Easy to forget a flag when deploying | Reproducible on every boot |
## Generate a Starter Template

Use `ironflow config init` to generate a starter YAML for your deployment tier:
```sh
# Local development (minimal, SQLite, dev mode)
ironflow config init > ironflow.yaml

# Production single-node (PostgreSQL, observability)
ironflow config init --prod > ironflow.yaml

# Multi-node cluster
ironflow config init --kind cluster > ironflow.yaml

# Multi-tenant platform
ironflow config init --kind platform > ironflow.yaml
```

## Three Kinds
Every `ironflow.yaml` has a `kind` that determines which features and validations apply:
| Kind | Use Case | Requirements |
|---|---|---|
| `Server` | Local dev, single-node production | None (SQLite + embedded NATS by default) |
| `Cluster` | Multi-node horizontal scaling | PostgreSQL + external NATS + stable `nodeId` |
| `Platform` | Multi-tenant SaaS | Everything in `Cluster` + organizations |
The kind you choose determines what fields are required. Start with Server and graduate to Cluster or Platform as your deployment grows.
## Server: Local Development

The simplest configuration — just a few lines to start:

```yaml
apiVersion: ironflow/v1
kind: Server
spec:
  port: 9123
  auth:
    devMode: true
```

```sh
ironflow serve -f ironflow.yaml
```

This boots Ironflow with embedded NATS (memory mode), SQLite, and authentication disabled. Similar to `ironflow serve --dev`, though the `--dev` flag also tightens crash-resume timings for faster local iteration.
## Server: Single-Node Production

Add PostgreSQL, secrets encryption, and observability:
```yaml
apiVersion: ironflow/v1
kind: Server
spec:
  port: 9123

  storage:
    driver: postgres
    url: ${IRONFLOW_DATABASE_URL}
    pool:
      maxConns: 25
      minConns: 5
      maxIdleTime: 30s

  nats:
    storeDir: /data/nats

  auth:
    masterKey: ${IRONFLOW_MASTER_KEY}
    jwtSecret: ${IRONFLOW_JWT_SECRET}  # optional — auto-generated if empty

  observability:
    tracing:
      endpoint: otel-collector:4317
      sampleRate: 1.0
    metrics:
      enabled: true
```

Notice the `${IRONFLOW_DATABASE_URL}` syntax — secrets are never stored in the file. See Environment Variable References below.
## Cluster: Multi-Node

Scale horizontally with shared PostgreSQL and external NATS. Deploy the same file on every node — only `nodeId` differs (set via an env var per node):
```yaml
apiVersion: ironflow/v1
kind: Cluster
spec:
  port: 9123

  storage:
    driver: postgres
    url: ${IRONFLOW_DATABASE_URL}
    pool:
      maxConns: 25
      minConns: 5

  nats:
    url: ${NATS_URL}
    # credentials: /path/to/nats.creds

  cluster:
    nodeId: ${IRONFLOW_NODE_ID}
    staleClaimThreshold: 2m

  auth:
    masterKey: ${IRONFLOW_MASTER_KEY}
    jwtSecret: ${IRONFLOW_JWT_SECRET}  # optional — auto-generated if empty

  observability:
    tracing:
      endpoint: otel-collector:4317
      sampleRate: 1.0
    metrics:
      enabled: true
```

The `Cluster` kind enforces three rules at startup:
- **PostgreSQL required** — SQLite does not support `SKIP LOCKED` for distributed scheduling
- **External NATS required** — embedded NATS is single-process only
- **Stable `nodeId` required** — used for claim ownership and log correlation
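The "same file everywhere, only the environment differs" pattern can be sketched in Docker Compose. This fragment is illustrative only — the service names, image tag, and config mount path are assumptions, not Ironflow's published artifacts:

```yaml
# docker-compose.yaml (illustrative fragment; volumes and the db/nats services omitted)
services:
  ironflow-1:
    image: ironflow:latest               # hypothetical image name
    command: serve -f /etc/ironflow/ironflow.yaml
    environment:
      IRONFLOW_NODE_ID: node-1           # the only per-node difference
      IRONFLOW_DATABASE_URL: postgres://iron@db/ironflow
      NATS_URL: nats://nats:4222
      IRONFLOW_MASTER_KEY: ${IRONFLOW_MASTER_KEY}

  ironflow-2:
    image: ironflow:latest
    command: serve -f /etc/ironflow/ironflow.yaml
    environment:
      IRONFLOW_NODE_ID: node-2
      IRONFLOW_DATABASE_URL: postgres://iron@db/ironflow
      NATS_URL: nats://nats:4222
      IRONFLOW_MASTER_KEY: ${IRONFLOW_MASTER_KEY}
```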
See Docker Compose Deployment for the full clustering guide.
## Platform: Multi-Tenant

Declare organizations, projects, and environments. Ironflow creates them on first boot and is idempotent on subsequent boots:
```yaml
apiVersion: ironflow/v1
kind: Platform
spec:
  port: 9123

  storage:
    driver: postgres
    url: ${IRONFLOW_DATABASE_URL}

  nats:
    url: ${NATS_URL}

  cluster:
    nodeId: ${IRONFLOW_NODE_ID}

  auth:
    masterKey: ${IRONFLOW_MASTER_KEY}
    jwtSecret: ${IRONFLOW_JWT_SECRET}  # optional — auto-generated if empty

  platform:
    organizations:
      - name: acme-corp
        projects:
          - name: payments
            environments:
              - name: development
              - name: staging
              - name: production
          - name: orders
            environments:
              - name: development
              - name: production
      - name: globex
        projects:
          - name: logistics
            environments:
              - name: production
```

`Platform` inherits all `Cluster` requirements and adds:
- **At least one organization** — the platform must have tenants
- **Unique names** — duplicate org, project, or environment names within a parent are rejected

The default organization (`org_default`) and its project/environment are always created first, regardless of what’s in the YAML. Your `Platform` organizations are created on top.
## Environment Variable References

The YAML file is safe to commit to git because secrets use `${VAR}` references that resolve at boot time:
| Syntax | Behavior |
|---|---|
| `${VAR}` | Required — startup fails with a clear error if not set |
| `${VAR:-default}` | Optional — uses the default value if the env var is not set |
| `postgres://literal` | Literal string — used as-is (no `${}` means no resolution) |
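The `${VAR}` and `${VAR:-default}` forms follow the same convention as POSIX shell parameter expansion, so you can preview how a default behaves in any shell (the variable name here is just an example):

```sh
# ${VAR:-default}: the default applies only when the variable is unset
unset NATS_MAX_MEM
echo "${NATS_MAX_MEM:-256MB}"    # prints 256MB

NATS_MAX_MEM=1GB
echo "${NATS_MAX_MEM:-256MB}"    # prints 1GB
```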
Example:

```yaml
storage:
  url: ${IRONFLOW_DATABASE_URL}        # required — error if unset

nats:
  maxMemory: ${NATS_MAX_MEM:-256MB}    # optional — defaults to 256MB

auth:
  masterKey: ${IRONFLOW_MASTER_KEY}    # required — error if unset
```

If an env var is not set and has no default, the error message tells you exactly which field and variable:
```
config file error: environment variable ${IRONFLOW_DATABASE_URL} is not set
```

## Validate Without Booting

Check your YAML for errors without starting the server:
```sh
ironflow validate -f ironflow.yaml
```

This parses the file, resolves `${VAR}` references, runs kind-specific validation, and prints a summary:

```
Validating ironflow.yaml...

Kind:     Cluster
Storage:  postgres (postgres://localhost/iron...)
NATS:     external (nats://h1:4222,h2:4222)
Auth:     enabled (master key set)
Tracing:  otel-collector:4317
Metrics:  enabled (/metrics)
Node ID:  node-1

✓ Valid. Ready to boot with: ironflow serve -f ironflow.yaml
```

Exit code 0 means valid; exit code 1 means invalid.
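The exit code makes validation easy to wire into CI. A hypothetical GitHub Actions step (the workflow layout and placeholder values are assumptions, and it presumes an `ironflow` binary is available on the runner) might look like:

```yaml
# .github/workflows/validate-config.yaml (illustrative)
steps:
  - uses: actions/checkout@v4
  - name: Validate Ironflow config
    run: ironflow validate -f ironflow.yaml   # non-zero exit fails the job
    env:
      # dummy values so required ${VAR} references resolve during validation
      IRONFLOW_DATABASE_URL: postgres://ci-placeholder/ironflow
      IRONFLOW_MASTER_KEY: ci-placeholder
      NATS_URL: nats://ci-placeholder:4222
      IRONFLOW_NODE_ID: ci
```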
## CLI Flag Overrides

CLI flags always take precedence over YAML values. This lets you use a shared YAML file but override specific settings per-node or per-environment:
```sh
# YAML says port 9123, but override to 8080 for this node
ironflow serve -f ironflow.yaml --port 8080

# YAML says dev mode is off, but enable it locally
ironflow serve -f ironflow.yaml --dev
```

Only flags you explicitly pass override the YAML. Unset flags preserve the YAML values.
## Backward Compatibility

If you don’t use `-f`, nothing changes. All existing flags and environment variables work exactly as before:

```sh
# These still work — no YAML needed
ironflow serve
ironflow serve --port 9000
ironflow serve --nats-url nats://h1:4222 --node-id node-1
IRONFLOW_DATABASE_URL="postgres://..." ironflow serve
```

The `-f` flag is opt-in. You can adopt YAML incrementally.
## Full Field Reference

| Field | Type | Default | Supports `${VAR}` | Notes |
|---|---|---|---|---|
| `apiVersion` | string | — | No | Must be `ironflow/v1` |
| `kind` | string | — | No | `Server`, `Cluster`, or `Platform` |
| `spec.port` | int | `9123` | No | HTTP server port |
| `spec.storage.driver` | string | `sqlite` | No | `sqlite` or `postgres` |
| `spec.storage.url` | string | — | Yes | PostgreSQL connection string |
| `spec.storage.path` | string | `ironflow.db` | No | SQLite file path |
| `spec.storage.pool.maxConns` | int | `25` | No | Max PostgreSQL connections |
| `spec.storage.pool.minConns` | int | `5` | No | Min PostgreSQL connections |
| `spec.storage.pool.maxIdleTime` | duration | `30s` | No | Idle connection timeout |
| `spec.nats.embedded` | bool | `true` | No | Use embedded NATS (auto-set to `false` when `url` is set) |
| `spec.nats.url` | string | — | Yes | External NATS URL |
| `spec.nats.credentials` | string | — | Yes | Path to `.creds` file |
| `spec.nats.storeDir` | string | — | No | JetStream storage directory |
| `spec.nats.maxMemory` | string | `256MB` | No | JetStream memory limit |
| `spec.nats.fileStorage` | bool | `false` | No | Create file-backed JetStream streams |
| `spec.nats.streamReplicas` | int | `1` | No | JetStream stream replica count |
| `spec.auth.devMode` | bool | `false` | No | Bypass all authentication |
| `spec.auth.masterKey` | string | — | Yes | AES-256 key for secrets |
| `spec.auth.jwtSecret` | string | — | Yes | JWT signing secret — auto-generated if empty |
| `spec.observability.tracing.endpoint` | string | — | No | OTLP gRPC endpoint |
| `spec.observability.tracing.sampleRate` | float | `1.0` | No | Trace sampling rate (0.0–1.0) |
| `spec.observability.tracing.serviceName` | string | `ironflow` | No | OTel service name |
| `spec.observability.tracing.insecure` | bool | `true` | No | Plaintext gRPC for OTLP |
| `spec.observability.metrics.enabled` | bool | `false` | No | Enable Prometheus `/metrics` |
| `spec.engine.pushTimeout` | duration | `10s` | No | Push mode HTTP timeout |
| `spec.engine.schedulerInterval` | duration | `1s` | No | Scheduler poll interval |
| `spec.engine.staleRunningTimeout` | duration | `5m` | No | Stale running run timeout |
| `spec.engine.retry.maxAttempts` | int | `3` | No | Default retry attempts |
| `spec.engine.retry.initialDelay` | duration | `1s` | No | First retry delay |
| `spec.engine.retry.maxDelay` | duration | `5m` | No | Maximum retry delay |
| `spec.engine.retry.backoff` | float | `2.0` | No | Backoff multiplier |
| `spec.cluster.nodeId` | string | — | Yes | Stable node identifier |
| `spec.cluster.staleClaimThreshold` | duration | `2m` | No | Dead-node claim recovery threshold |
| `spec.cluster.staleClaimRecoveryInterval` | duration | `60s` | No | How often the scheduler scans for orphaned claims |
| `spec.platform.organizations[].name` | string | — | No | Organization name |
| `spec.platform.organizations[].projects[].name` | string | — | No | Project name |
| `spec.platform.organizations[].projects[].environments[].name` | string | — | No | Environment name |
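A note on the `spec.engine.retry.*` fields: a common reading of these knobs is exponential backoff, where the wait before retry *n* is `initialDelay × backoff^(n−1)`, capped at `maxDelay`. That formula is an assumption here, not a documented guarantee, but it makes the defaults easy to reason about:

```sh
# Assumed schedule: the delay doubles each retry (backoff 2.0), capped at maxDelay
delay=1    # initialDelay: 1s
cap=300    # maxDelay: 5m = 300s
for retry in 1 2 3 4; do
  echo "wait before retry ${retry}: ${delay}s"
  delay=$(( delay * 2 ))
  if [ "$delay" -gt "$cap" ]; then delay=$cap; fi
done
```

With the default `maxAttempts: 3`, only the first two of these waits would ever be used.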
## What’s Next?

- Docker Compose Deployment — detailed clustering guide with Docker Compose examples
- Observability — tracing and metrics configuration
- API Keys — authentication setup
- Secrets Management — encrypting secrets at rest