Deployment Overview
Ironflow ships as a single binary that embeds NATS JetStream and can use SQLite or PostgreSQL. Every deployment option runs the same code. Deployment templates simplify configuration by packaging recommended settings for different scales.
Deployment Templates
Pick a template that matches your needs:
| | Small | Medium | Large | Multi-Tenant |
|---|---|---|---|---|
| Replicas | 1 | 3 | 2-10 (HPA) | 1 per tenant |
| NATS | Bundled (1 node) | Bundled (3-node cluster) | External | Bundled (per tenant) |
| PostgreSQL | Bundled (CloudNativePG) | Bundled (CloudNativePG) | External | Bundled (per tenant) |
| Cluster mode | No | Yes | Yes | No |
| Monitoring | ServiceMonitor | ServiceMonitor | ServiceMonitor | ServiceMonitor |
| NetworkPolicy | No | No | No | Yes (default-deny) |
| ResourceQuota | No | No | No | Yes |
| PDB | No | No | Yes | No |
| Use case | Evaluation, dev, staging | Production, HA | High-traffic | Shared cluster, tenant isolation |
| Cost estimate | ~$5/mo | ~$25/mo | ~$50+/mo | ~$5-10/mo per tenant |
Quick Start
Options A and B deploy to an existing Kubernetes cluster (whichever cluster your kubectl context points at). Options C and D provision the cluster for you (Hetzner Cloud or local Docker).
Prerequisites (Options A & B):
- A running Kubernetes cluster (any provider: Hetzner, AWS EKS, GKE, AKS, Docker Desktop, minikube, k3s, etc.)
- Need a cluster? See Provisioning Infrastructure to create one on Hetzner Cloud, or use Option C below.
- Helm 3+
- kubectl configured for your cluster
- CloudNativePG operator installed (Small and Medium templates bundle PostgreSQL via CloudNativePG)
For the Large template, you also need external PostgreSQL and NATS provisioned separately.
Option A: CLI (recommended)
The ironflow deploy command wraps Helm: it selects the right values file for your template and runs helm install against your current kubectl context.
```shell
# Deploy with a template
ironflow deploy --template medium --name my-release

# Upgrade
ironflow deploy upgrade --template medium --name my-release

# Check status
ironflow deploy status --name my-release
```
Option B: Direct Helm
```shell
helm install my-release ./deploy/helm/ironflow \
  -f deploy/helm/ironflow/values-medium.yaml
```
Option C: Full stack from scratch (Hetzner)
No existing cluster needed. This provisions a Hetzner Cloud Kubernetes cluster. Use ironflow deploy afterwards to install Ironflow.
```shell
# Provision infrastructure
ironflow provision create --provider hetzner --template medium --name ironflow
```
Option D: Local Kubernetes cluster (k3d)
No cloud account needed. Creates a local Kubernetes cluster in Docker with the same small/medium/large sizing as Hetzner.
```shell
# Create a local cluster
ironflow provision create --provider k3d --template small --name dev

# Build and import your local image
docker build -t ironflow:local .
k3d image import ironflow:local -c dev

# Deploy
ironflow deploy --template small --name dev \
  --set image.repository=ironflow --set image.tag=local --set image.pullPolicy=Never
```
Requires k3d and Docker. See Helm Chart Development for the full local development workflow.
Local Development (No Deployment Needed)
For local development, Ironflow runs with zero infrastructure:
```shell
ironflow serve --dev
```
This starts Ironflow at http://localhost:9123 with an embedded NATS JetStream broker and SQLite. Everything is self-contained in one process. See the Getting Started tutorial.
Template Details
Small
Best for evaluation, development, staging, and small teams. A single Ironflow replica with bundled NATS and PostgreSQL (via the CloudNativePG operator). No cluster mode, no monitoring.
```shell
ironflow deploy --template small --name dev
# or
helm install dev ./deploy/helm/ironflow -f deploy/helm/ironflow/values-small.yaml
```
Prerequisites: Kubernetes cluster with the CloudNativePG operator installed.
Medium
Best for production workloads needing high availability. Three Ironflow replicas with a bundled 3-node NATS JetStream cluster and CloudNativePG PostgreSQL. Cluster mode enabled for distributed scheduling. ServiceMonitor for Prometheus scraping.
```shell
ironflow deploy --template medium --name prod
# or
helm install prod ./deploy/helm/ironflow -f deploy/helm/ironflow/values-medium.yaml
```
Prerequisites: Kubernetes cluster with the CloudNativePG operator installed.
Large
Best for high-traffic production deployments. Horizontal pod autoscaling (2-10 replicas), external NATS and PostgreSQL (you manage them), ServiceMonitor-based monitoring, and a pod disruption budget.
```shell
ironflow deploy --template large --name prod \
  --set externalDatabase.url=postgres://user:pass@host:5432/ironflow \
  --set externalNats.url=nats://nats-1:4222,nats://nats-2:4222
# or
helm install prod ./deploy/helm/ironflow \
  -f deploy/helm/ironflow/values-large.yaml \
  --set externalDatabase.url=postgres://... \
  --set externalNats.url=nats://...
```
Prerequisites: external PostgreSQL database and NATS JetStream cluster, provisioned separately.
Multi-Tenant
Deploy multiple tenants on a shared Kubernetes cluster with namespace-level isolation. Each tenant gets its own Ironflow, NATS, and PostgreSQL in a dedicated namespace, with network policies blocking cross-tenant traffic and resource quotas preventing resource starvation.
```shell
helm install acme ./deploy/helm/ironflow \
  -n tenant-acme --create-namespace \
  -f deploy/helm/ironflow/values-multi-tenant.yaml \
  --set ingress.host=acme.ironflow.example.com \
  --set ironflow.masterKey=$(openssl rand -hex 32)
```
Prerequisites: Kubernetes cluster with the CloudNativePG operator installed. See Multi-Tenant Deployment for the full setup guide.
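The ironflow.masterKey value passed above is a random 256-bit key generated inline. If you prefer to generate the key up front (for example, to store it in a secret manager before installing), a minimal sketch:

```shell
# Generate a 32-byte (256-bit) key, hex-encoded, for use as the master key.
MASTER_KEY=$(openssl rand -hex 32)

# 32 bytes hex-encoded = 64 characters
echo "${#MASTER_KEY}"   # prints 64
```

Generating the key once and reusing it matters if you ever reinstall the release: secrets encrypted at rest with one master key cannot be read with another.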
Within Your Deployment
Each Ironflow deployment supports the full resource hierarchy:
Organization → Project → Environment → Resources

- Organization is the top-level tenant
- Projects group related environments (e.g., a microservice or team boundary)
- Environments within a project (e.g., dev, staging, production) provide data isolation
API keys are scoped to a specific environment. NATS subjects include both project and environment names for isolation. See Platform Architecture for details.
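As an illustration of that naming, a subject embedding both project and environment might look like the following. The exact Ironflow subject layout is an assumption here, not documented on this page:

```shell
# Hypothetical subject layout: because project and environment are both
# part of the subject name, traffic for different environments never
# shares a subject.
project="payments"
environment="production"
subject="ironflow.${project}.${environment}.events"
echo "$subject"   # prints ironflow.payments.production.events
```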
Detailed Reference
For deeper information about each deployment method:
- Self Hosting — Minimal single-process Docker setup
- Docker Compose — Multi-node with PostgreSQL and NATS clustering
- Hetzner Cloud — Infrastructure setup, Object Storage, and cluster provisioning
- Kubernetes — Secrets, Helm templates, CLI deploy, and monitoring
- Helm Chart Development — Chart development, testing, and OCI publishing
- kubectl Operations — Day-2 operational procedures
Common Configuration
Regardless of template, Ironflow uses the same configuration:
| Setting | Env Var | Purpose |
|---|---|---|
| Database URL | IRONFLOW_DATABASE_URL | PostgreSQL connection string (omit for embedded SQLite) |
| Master key | IRONFLOW_MASTER_KEY | AES-256 encryption key for secrets at rest |
| NATS URL | NATS_URL | External NATS connection (omit for embedded) |
| Log level | LOG_LEVEL | debug, info, warn, error |
| Metrics | IRONFLOW_METRICS_ENABLED | Enable Prometheus /metrics endpoint |
| Tracing | IRONFLOW_OTEL_ENDPOINT | OpenTelemetry collector URL |
See the Configuration Reference for the full list.
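As a sketch, the table above translates to an environment like the following. The hostnames, ports, and credentials are placeholders, not values taken from this document:

```shell
# Example environment for a deployment with external PostgreSQL and NATS.
# All hostnames and credentials below are placeholders.
export IRONFLOW_DATABASE_URL="postgres://ironflow:secret@db.internal:5432/ironflow"
export IRONFLOW_MASTER_KEY="$(openssl rand -hex 32)"     # AES-256 key for secrets at rest
export NATS_URL="nats://nats-1:4222,nats://nats-2:4222"  # omit to use the embedded broker
export LOG_LEVEL="info"
export IRONFLOW_METRICS_ENABLED="true"                   # expose the Prometheus /metrics endpoint
export IRONFLOW_OTEL_ENDPOINT="http://otel-collector:4317"
```

Omitting IRONFLOW_DATABASE_URL and NATS_URL falls back to the embedded SQLite database and NATS broker, which is what the small template and ironflow serve --dev rely on.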