
Deployment Overview

Ironflow ships as a single binary that embeds NATS JetStream and can use SQLite or PostgreSQL. Every deployment option runs the same code. Deployment templates simplify configuration by packaging recommended settings for different scales.

Pick a template that matches your needs:

                Small                     Medium                    Large           Multi-Tenant
Replicas        1                         3                         2-10 (HPA)      1 per tenant
NATS            Bundled (1 node)          Bundled (3-node cluster)  External        Bundled (per tenant)
PostgreSQL      Bundled (CloudNativePG)   Bundled (CloudNativePG)   External        Bundled (per tenant)
Cluster mode    No                        Yes                       Yes             No
Monitoring      ServiceMonitor            ServiceMonitor            ServiceMonitor  ServiceMonitor
NetworkPolicy   No                        No                        No              Yes (default-deny)
ResourceQuota   No                        No                        No              Yes
PDB             No                        No                        Yes             No
Use case        Evaluation, dev, staging  Production, HA            High-traffic    Shared cluster, tenant isolation
Cost estimate   ~$5/mo                    ~$25/mo                   ~$50+/mo        ~$5-10/mo per tenant

Options A and B deploy to an existing Kubernetes cluster — whichever cluster your kubectl context is pointing at. Options C and D provision the cluster for you (Hetzner Cloud or local Docker).
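
Since both options install into whatever cluster kubectl currently targets, it's worth double-checking the context before deploying. A quick sketch using standard kubectl commands (the context name below is a placeholder):

```shell
# Show the cluster your next deploy will target
kubectl config current-context

# List available contexts and switch if needed
kubectl config get-contexts
kubectl config use-context my-cluster   # "my-cluster" is a placeholder name
```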

Prerequisites (Options A & B):

  • A running Kubernetes cluster (any provider: Hetzner, AWS EKS, GKE, AKS, Docker Desktop, minikube, k3s, etc.)
  • Helm 3+
  • kubectl configured for your cluster
  • CloudNativePG operator installed (Small and Medium templates bundle PostgreSQL via CloudNativePG)
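
If the CloudNativePG operator isn't installed yet, it can be added from the official CloudNativePG Helm chart. A sketch (the release name and namespace below are arbitrary choices, not requirements):

```shell
# Add the CloudNativePG chart repository and install the operator
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update
helm install cnpg cnpg/cloudnative-pg \
  --namespace cnpg-system --create-namespace

# Wait for the operator deployment to become available before deploying Ironflow
kubectl wait --for=condition=Available deployment --all \
  -n cnpg-system --timeout=120s
```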

For the Large template, you also need external PostgreSQL and NATS provisioned separately.

Option A: Deploy with the Ironflow CLI

The ironflow deploy command wraps Helm: it selects the right values file for your template and runs helm install against your current kubectl context.

Terminal window
# Deploy with a template
ironflow deploy --template medium --name my-release
# Upgrade
ironflow deploy upgrade --template medium --name my-release
# Check status
ironflow deploy status --name my-release

Option B: Deploy with Helm directly

The same templates are plain Helm values files, so you can run Helm yourself:

Terminal window
helm install my-release ./deploy/helm/ironflow \
-f deploy/helm/ironflow/values-medium.yaml

Option C: Full stack from scratch (Hetzner)


No existing cluster needed. This provisions a Hetzner Cloud Kubernetes cluster. Use ironflow deploy afterwards to install Ironflow.

Terminal window
# Provision infrastructure
ironflow provision create --provider hetzner --template medium --name ironflow
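
Once provisioning completes, installing Ironflow is the same ironflow deploy flow as Options A and B. This sketch assumes the provisioner leaves your kubectl context pointing at the new cluster; if not, switch contexts first:

```shell
# Confirm kubectl now targets the new cluster
kubectl config current-context

# Install Ironflow onto the freshly provisioned cluster
ironflow deploy --template medium --name ironflow
ironflow deploy status --name ironflow
```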

Option D: Local cluster from scratch (k3d)

No cloud account needed. This creates a local Kubernetes cluster in Docker Desktop with the same small/medium/large sizing as the Hetzner option.

Terminal window
# Create a local cluster
ironflow provision create --provider k3d --template small --name dev
# Build and import your local image
docker build -t ironflow:local .
k3d image import ironflow:local -c dev
# Deploy
ironflow deploy --template small --name dev \
--set image.repository=ironflow --set image.tag=local --set image.pullPolicy=Never

Requires k3d and Docker. See Helm Chart Development for the full local development workflow.

Dev mode (no Kubernetes)

For local development, Ironflow runs with zero infrastructure:

Terminal window
ironflow serve --dev

This starts Ironflow at http://localhost:9123 with an embedded NATS JetStream broker and SQLite. Everything is self-contained in one process. See the Getting Started tutorial.
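
A quick way to confirm the dev server is up is to curl it. The /healthz path below is an assumption for illustration, not a documented endpoint; substitute whatever health route your build exposes:

```shell
# Hit the dev server (endpoint path is hypothetical)
curl -i http://localhost:9123/healthz
```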

Small template

Best for evaluation, development, staging, and small teams. A single Ironflow replica with bundled NATS and PostgreSQL (via the CloudNativePG operator). No cluster mode.

Terminal window
ironflow deploy --template small --name dev
# or
helm install dev ./deploy/helm/ironflow -f deploy/helm/ironflow/values-small.yaml

Prerequisites: Kubernetes cluster with CloudNativePG operator installed.

Medium template

Best for production workloads needing high availability. Three Ironflow replicas with a bundled 3-node NATS JetStream cluster and CloudNativePG PostgreSQL. Cluster mode is enabled for distributed scheduling, and a ServiceMonitor is created for Prometheus scraping.

Terminal window
ironflow deploy --template medium --name prod
# or
helm install prod ./deploy/helm/ironflow -f deploy/helm/ironflow/values-medium.yaml

Prerequisites: Kubernetes cluster with CloudNativePG operator installed.

Large template

Best for high-traffic production deployments. Horizontal pod autoscaling (2-10 replicas), external NATS and PostgreSQL (you manage them), the full monitoring stack, network policies, and a pod disruption budget.

Terminal window
ironflow deploy --template large --name prod \
--set externalDatabase.url=postgres://user:pass@host:5432/ironflow \
--set externalNats.url=nats://nats-1:4222,nats://nats-2:4222
# or
helm install prod ./deploy/helm/ironflow \
-f deploy/helm/ironflow/values-large.yaml \
--set externalDatabase.url=postgres://... \
--set externalNats.url=nats://...

Prerequisites: External PostgreSQL database and NATS JetStream cluster, provisioned separately.
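
Before installing the Large template, it can save a failed rollout to confirm both external dependencies are reachable. A sketch using psql and the natscli tool (connection strings are placeholders):

```shell
# Check PostgreSQL connectivity (connection string is a placeholder)
psql "postgres://user:pass@host:5432/ironflow" -c 'SELECT 1;'

# Check NATS reachability and round-trip time (server URL is a placeholder)
nats rtt -s nats://nats-1:4222
```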

Multi-Tenant template

Deploy multiple tenants on a shared Kubernetes cluster with namespace-level isolation. Each tenant gets its own Ironflow, NATS, and PostgreSQL in a dedicated namespace, with default-deny network policies blocking cross-tenant traffic and resource quotas preventing resource starvation.

Terminal window
helm install acme ./deploy/helm/ironflow \
-n tenant-acme --create-namespace \
-f deploy/helm/ironflow/values-multi-tenant.yaml \
--set ingress.host=acme.ironflow.example.com \
--set ironflow.masterKey=$(openssl rand -hex 32)

Prerequisites: Kubernetes cluster with CloudNativePG operator installed. See Multi-Tenant Deployment for the full setup guide.

Each Ironflow deployment supports the full resource hierarchy:

Organization → Project → Environment → Resources
  • Organization is the top-level tenant
  • Projects group related environments (e.g., a microservice or team boundary)
  • Environments within a project (e.g., dev, staging, production) provide data isolation

API keys are scoped to a specific environment. NATS subjects include both project and environment names for isolation. See Platform Architecture for details.

For deeper information, see the dedicated guide for each deployment method.

Regardless of template, Ironflow uses the same configuration:

Setting       Env Var                   Purpose
Database URL  IRONFLOW_DATABASE_URL     PostgreSQL connection string (omit for embedded SQLite)
Master key    IRONFLOW_MASTER_KEY       AES-256 encryption key for secrets at rest
NATS URL      NATS_URL                  External NATS connection (omit for embedded)
Log level     LOG_LEVEL                 debug, info, warn, error
Metrics       IRONFLOW_METRICS_ENABLED  Enable Prometheus /metrics endpoint
Tracing       IRONFLOW_OTEL_ENDPOINT    OpenTelemetry collector URL
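
As an example, a minimal environment for an external-PostgreSQL deployment might look like the following (hostnames and credentials are placeholders; the master key is 32 random bytes, hex-encoded, matching the openssl invocation used in the multi-tenant example):

```shell
# Placeholder connection strings: substitute your own hosts and credentials
export IRONFLOW_DATABASE_URL="postgres://user:pass@db-host:5432/ironflow"
export NATS_URL="nats://nats-host:4222"

# Generate a fresh AES-256 master key (32 random bytes, hex-encoded)
export IRONFLOW_MASTER_KEY="$(openssl rand -hex 32)"
echo "${#IRONFLOW_MASTER_KEY}"   # 64 hex characters

export LOG_LEVEL=info
export IRONFLOW_METRICS_ENABLED=true
```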

See the Configuration Reference for the full list.