
Kubernetes Deployment

Deploy Ironflow on any Kubernetes cluster. Choose a deployment template (Small, Medium, or Large) and deploy with the CLI or directly with Helm.

Prerequisites

The Large template requires external PostgreSQL and NATS, provisioned separately; the Small and Medium templates bundle both.

1. Get a Kubernetes Cluster

If you already have a Kubernetes cluster, skip to Deploy Ironflow.

Provision on Hetzner Cloud

Ironflow can provision a Kubernetes cluster on Hetzner Cloud using Terraform.

One command creates the Kubernetes cluster on Hetzner Cloud:

Terminal window
ironflow provision create --provider hetzner --template medium --name ironflow

This creates a Talos Linux cluster on Hetzner Cloud. The node count and server types depend on the template — see Hetzner Cloud > Node Sizing for details. To customize further, edit deploy/terraform/hetzner/terraform.tfvars.

Check status

Terminal window
ironflow provision status --provider hetzner --name ironflow

Tear down

Terminal window
ironflow provision destroy --provider hetzner --name ironflow

For a detailed step-by-step guide including Terraform variable customization, node sizing, and the full bootstrap process, see Hetzner Cloud Deployment.

Other Providers

You can use any Kubernetes provider — AWS EKS, GKE, AKS, k3s, minikube, etc. For local development with Docker, use the k3d tab above. Once kubectl is configured and pointing at your cluster, proceed to the next step.

2. Set kubectl Context

Ensure kubectl is pointing at the cluster you want to deploy to:

Terminal window
# Check current context
kubectl config current-context
# If you provisioned via the CLI, use the durable kubeconfig copy
export KUBECONFIG=~/.kube/clusters/hetzner-ironflow.yaml
# Or pass --kubeconfig to deploy commands directly
ironflow deploy --kubeconfig ~/.kube/clusters/hetzner-ironflow.yaml --template medium --name prod

Provision --name vs Deploy --name

These two --name flags refer to different things:

  • ironflow provision --name myCluster sets the cluster name — all cloud resources (servers, networks, firewalls) are named after it. The kubeconfig is saved to ~/.kube/clusters/<provider>-myCluster.yaml.
  • ironflow deploy --name myRelease sets the Helm release name — an application install within whatever cluster your kubectl is targeting.

Deploy commands default to your current kubectl context when --kubeconfig is not provided. If you manage multiple clusters, always pass --kubeconfig to avoid accidentally deploying to the wrong cluster:

Terminal window
# Safe: explicitly target the right cluster
ironflow deploy --template medium --name prod \
--kubeconfig ~/.kube/clusters/hetzner-prod.yaml

The workspace-local copy at deploy/terraform/hetzner/kubeconfig is gitignored and won’t exist in new workspaces or git worktrees. Always use the durable copy at ~/.kube/clusters/. See kubectl Operations > Kubeconfig Setup for more options.
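
If only the workspace-local copy exists (for example, after running Terraform by hand), you can create the durable copy yourself. A minimal sketch, assuming the cluster was provisioned with --provider hetzner --name ironflow:

Terminal window
mkdir -p ~/.kube/clusters
cp deploy/terraform/hetzner/kubeconfig ~/.kube/clusters/hetzner-ironflow.yaml
export KUBECONFIG=~/.kube/clusters/hetzner-ironflow.yaml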

3. Configure Image Pull Secret

The Ironflow container image is hosted on GitHub Container Registry (GHCR). All deployment templates (values-small.yaml, values-medium.yaml, values-large.yaml) reference a pull secret named ghcr-pull-secret by default. Create this secret in your cluster before deploying:

Terminal window
export GITHUB_USERNAME=<your-github-username>
export GITHUB_PAT=<your-github-pat>
kubectl create secret docker-registry ghcr-pull-secret \
--docker-server=ghcr.io \
--docker-username=$GITHUB_USERNAME \
--docker-password=$GITHUB_PAT

GITHUB_PAT must be a GitHub Personal Access Token with the read:packages scope.

The secret name ghcr-pull-secret is configured in each template’s values file under imagePullSecrets. Once the secret exists in the namespace, all deploy and upgrade commands will use it automatically — no need to pass --set on every command.

If using a different secret name, override it with --set 'imagePullSecrets[0].name=your-secret-name' or edit the values file directly.
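
For example, a deploy that points the chart at a differently named pull secret might look like this (the secret and release names are illustrative):

Terminal window
ironflow deploy --template medium --name staging \
--set 'imagePullSecrets[0].name=my-registry-secret'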

4. Create Backup Credentials Secret

Small and Medium templates include daily PostgreSQL backups to an S3-compatible object store via the Barman Cloud Plugin. Create a Kubernetes Secret with your S3 credentials before deploying:

Terminal window
kubectl create namespace ironflow 2>/dev/null || true
kubectl create secret generic ironflow-s3-creds -n ironflow \
--from-literal=ACCESS_KEY_ID="your-s3-access-key" \
--from-literal=SECRET_ACCESS_KEY="your-s3-secret-key"

The default values files point to Hetzner Object Storage (fsn1.your-objectstorage.com). The S3 destination path is auto-derived from the Helm release name (s3://ironflow-backups/<release-name>), so each deployment gets an isolated backup path automatically. If you use a different S3 provider or bucket, override postgresql.objectStore.endpointURL and optionally postgresql.objectStore.destinationPath via --set.

The secret name ironflow-s3-creds and key names ACCESS_KEY_ID / SECRET_ACCESS_KEY are the convention used by the default values files. If you use different names, update the postgresql.objectStore.s3Credentials section in your values file to match.
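
As a sketch, pointing backups at a different S3-compatible provider could look like the following; the endpoint and bucket are illustrative, and the value keys are the ones named above:

Terminal window
ironflow deploy --template medium --name staging \
--set postgresql.objectStore.endpointURL=https://s3.eu-central-1.amazonaws.com \
--set postgresql.objectStore.destinationPath=s3://my-backups/staging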

5. Deploy Ironflow

Deployment templates (Small, Medium, Large) control application sizing — replicas, resource requests, NATS topology, and monitoring. They do not change the Kubernetes cluster itself. The cluster size (node count and server types) is configured separately in deploy/terraform/hetzner/terraform.tfvars. See Hetzner Cloud > Node Sizing for recommended cluster sizes per template.

The ironflow deploy command wraps Helm for a streamlined experience.

Install

Terminal window
# Small — 1 replica, bundled NATS + PostgreSQL
ironflow deploy --template small --name dev
# Medium — 3 replicas, NATS cluster, HA
ironflow deploy --template medium --name staging
# Large — HPA (2-10 replicas), external NATS + PostgreSQL
ironflow deploy --template large --name prod \
--set externalDatabase.url=postgres://user:pass@host:5432/ironflow \
--set externalNats.url=nats://nats-1:4222,nats://nats-2:4222
# With Hetzner load balancer — installs Traefik + Hetzner LB optimizations
ironflow deploy --template medium --name prod --hetzner-location fsn1

Upgrade

Use upgrade to apply changes within the same template — for example, picking up a new Ironflow version, adjusting resource limits, or changing replica count:

Terminal window
ironflow deploy upgrade --template medium --name dev

Changing templates requires delete + redeploy

You cannot upgrade across templates that change the NATS topology (e.g., Small → Medium moves NATS from a single node to a 3-node cluster), because Kubernetes does not allow the relevant StatefulSet spec fields to be changed in place.

To switch templates, delete and redeploy:

Terminal window
ironflow deploy delete --name dev
ironflow deploy --template medium --name dev

This destroys all bundled data (NATS streams and the bundled PostgreSQL database), so back up anything you need before switching. If you use external databases, your data is unaffected. The ghcr-pull-secret and other manually created secrets are not deleted — no need to recreate them.
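
If you want a quick manual dump of the bundled database before switching, something along these lines works; the pod and database names are assumptions, so check your release’s resources first:

Terminal window
# List pods to find the PostgreSQL primary for your release (names vary per release)
kubectl get pods -n ironflow
# Dump the application database to a local file
kubectl -n ironflow exec <postgres-primary-pod> -- pg_dump -U postgres ironflow > ironflow-backup.sql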

Status

Terminal window
ironflow deploy status --name dev

Delete

Terminal window
ironflow deploy delete --name dev

Custom chart path

If the Helm chart is not auto-detected (e.g., binary is installed globally), specify it:

Terminal window
ironflow deploy --template medium --name staging --chart /path/to/deploy/helm/ironflow

6. Access the Dashboard

By default, Ironflow is only reachable inside the cluster via a ClusterIP Service. To access the API and dashboard from outside the cluster, you have two options, covered in Option A and Option B below.

Dashboard Credentials

On first boot, Ironflow generates an admin API key and dashboard user, then prints the full credentials to the server logs. Save them immediately — the plaintext API key and password are only shown on first boot.

On subsequent restarts, the server still prints the credentials section but shows the key prefix and a reminder instead of the full plaintext values.

Retrieve the credentials from the pod logs:

Terminal window
kubectl logs -n ironflow $(kubectl get pods -n ironflow -l app.kubernetes.io/component=server -o name | head -1) | grep -A8 "Admin API Key"

First boot output:

Admin API Key:
ifkey_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Dashboard Admin:
Email: admin@ironflow.local
Password: <random-generated-password>
(save these credentials — they will not be shown again)

Subsequent boots output:

Admin API Key:
ifkey_xxxx... (created on first boot)
Dashboard Admin:
Email: admin@ironflow.local
Password: (shown on first boot only — check initial pod logs)

Kubeconfig required

If kubectl returns connection refused errors pointing at localhost:8080, your kubeconfig is not set. If you provisioned via the Ironflow CLI, the kubeconfig is saved to ~/.kube/clusters/:

Terminal window
export KUBECONFIG=~/.kube/clusters/hetzner-ironflow.yaml

See kubectl Operations > Kubeconfig Setup for details.

Option A: Port-forward (development / quick access)

No DNS or ingress needed. Forward the Ironflow service port to your local machine:

Terminal window
kubectl port-forward svc/dev-ironflow -n ironflow 9123:9123
# Open http://localhost:9123

This is fine for development and testing but not suitable for production.
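
With the forward running, a quick way to confirm the service is reachable is to hit the same health endpoint used later for the ingress check:

Terminal window
curl http://localhost:9123/health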

Option B: Ingress + DNS (production)

Expose Ironflow via HTTPS with a domain name. This requires:

  • A domain name (e.g., ironflow.example.com)
  • An ingress controller installed in the cluster (Traefik is automatically installed as a prerequisite when using --hetzner-location with the ironflow deploy CLI)
  • cert-manager for TLS certificates (also installed automatically by the Terraform module)

Step 1: Get the Load Balancer IP

Terminal window
kubectl get svc -n traefik traefik -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

If you deployed with --hetzner-location, a Hetzner Load Balancer is created automatically. You can also find the IP in the Hetzner Cloud Console under Load Balancers. See Hetzner Cloud > External Access via Load Balancer for details.

Step 2: Create a DNS A record

At your DNS provider, create an A record:

Type   Name                   Value
A      ironflow.example.com   <load-balancer-ip>

Wait for DNS propagation (usually a few minutes).
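
You can check propagation from your machine before moving on (replace the hostname with your own domain):

Terminal window
dig +short ironflow.example.com
# Should print the load balancer IP from Step 1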

Step 3: Enable ingress

Terminal window
ironflow deploy upgrade --template medium --name dev \
--set ingress.enabled=true \
--set ingress.host=ironflow.example.com

Or with Helm directly:

Terminal window
helm upgrade dev ./deploy/helm/ironflow \
-f deploy/helm/ironflow/values-medium.yaml \
--set ingress.enabled=true \
--set ingress.host=ironflow.example.com

cert-manager will automatically provision a TLS certificate from Let’s Encrypt. This may take 1-2 minutes. Check certificate status:

Terminal window
kubectl get certificate

Once the certificate is ready, Ironflow is available at https://ironflow.example.com.
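
If the certificate does not become ready, inspecting the cert-manager resources usually shows why; this assumes the default ironflow namespace:

Terminal window
kubectl describe certificate -n ironflow
kubectl get challenges -n ironflow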

Step 4: Verify

Terminal window
curl https://ironflow.example.com/health

What’s exposed and what’s not

Only the Ironflow API and dashboard (port 9123) are exposed via ingress. NATS (4222) and PostgreSQL (5432) remain internal to the cluster. All Ironflow functionality — events, workflows, pub/sub, SDK connections — goes through the API.
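
To confirm this for your own release, list the Services in the namespace; every entry should be of type ClusterIP, with external traffic entering only through the ingress controller in its own namespace:

Terminal window
kubectl get svc -n ironflow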

Ingress configuration reference

Setting                 Default                      Description
ingress.enabled         false                        Enable Kubernetes Ingress
ingress.host            ""                           Domain name (required when enabled)
ingress.className       traefik                      Ingress controller class (set by medium/large templates; empty in base values)
ingress.tls             true                         Enable TLS via cert-manager
ingress.clusterIssuer   letsencrypt-prod-<release>   cert-manager ClusterIssuer (release-name-suffixed by default)
ingress.annotations     {}                           Additional Ingress annotations

Why standard Kubernetes Ingress?

Ironflow uses the standard networking.k8s.io/v1 Ingress API rather than controller-specific CRDs (e.g., Traefik IngressRoute). This works with any ingress controller, though Traefik is installed by default when using --hetzner-location. For the simple routing Ironflow needs (single host, TLS, one backend), standard Ingress is the most portable option. If you need advanced routing features such as traffic splitting or header-based matching, consider the Kubernetes Gateway API, which reached general availability with its v1.0 release.

Cluster Architecture (Hetzner)

┌───────────────────────────────────────────────────────────┐
│ Hetzner Cloud │
│ │
│ Control Plane (3x cpx22) Workers (2x cpx22) │
│ ┌─────────┐ ┌─────────┐ ┌─────────────────┐ │
│ │control-1│ │control-2│ │ Ironflow (x2) │ │
│ └─────────┘ └─────────┘ │ NATS JS (x3) │ │
│ ┌─────────┐ │ PostgreSQL (x2) │ │
│ │control-3│ │ PgBouncer │ │
│ └─────────┘ └─────────────────┘ │
│ │
│ Cilium CNI · Hcloud CCM+CSI · cert-manager │
└───────────────────────────────────────────────────────────┘

Cost: ~€52-81/month depending on node sizes. Tear down instantly to stop billing. See Hetzner Cloud > Node Sizing for per-template cost estimates.

Component       Specification
Control plane   3x cpx22 (3 vCPU, 4 GB RAM)
Workers         2x cpx22 (3 vCPU, 4 GB RAM)
OS              Talos Linux (immutable, no SSH)
CNI             Cilium
Storage         Hetzner Cloud Volumes (CSI)
Certificates    cert-manager with Let’s Encrypt

Multi-Tenant Deployment

Deploy multiple tenants on a single Kubernetes cluster with full isolation. Each tenant gets its own namespace with dedicated Ironflow, NATS, and PostgreSQL instances.

How it works

  • One namespace per tenant — each helm install targets a separate namespace
  • Network isolation — networkPolicy.defaultDeny: true blocks all cross-namespace traffic; only the ingress controller and monitoring namespaces can reach tenant pods (see the check sketched after this list)
  • Resource limits — resourceQuota caps each tenant’s CPU, memory, storage, and pod count
  • Data isolation — each tenant has its own NATS streams and PostgreSQL database
  • Per-tenant ingress — separate hostname per tenant (e.g., acme.ironflow.example.com)
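
A rough way to verify the default-deny policy is to probe a tenant from an unrelated namespace; this assumes a tenant release named acme and the usual service port 9123 (the service name is illustrative):

Terminal window
kubectl -n default run nettest --rm -i --image=busybox --restart=Never -- \
wget -qO- -T 5 http://acme-ironflow.tenant-acme.svc.cluster.local:9123/health || echo "blocked (expected)"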

Deploy a tenant

Terminal window
# 1. Create namespace + secrets
kubectl create ns tenant-acme
kubectl -n tenant-acme create secret docker-registry ghcr-pull-secret \
--docker-server=ghcr.io --docker-username=$GITHUB_USERNAME --docker-password=$GITHUB_PAT
kubectl -n tenant-acme create secret generic ironflow-s3-creds \
--from-literal=ACCESS_KEY_ID="..." --from-literal=SECRET_ACCESS_KEY="..."
# 2. Install CNPG operator (once per cluster, not per tenant)
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace --wait
# 3. Deploy
helm install acme ./deploy/helm/ironflow \
-n tenant-acme \
-f deploy/helm/ironflow/values-multi-tenant.yaml \
--set ingress.host=acme.ironflow.example.com \
--set ironflow.masterKey=$(openssl rand -hex 32)

Repeat for each tenant with a different namespace and hostname. The S3 backup destination path is auto-derived from the Helm release name (s3://ironflow-backups/acme), so each tenant’s backups are automatically isolated. Weekly backup restore verification runs by default (see backupVerification in the multi-tenant values).

For disaster recovery (restoring a tenant from backup), see Restore a single tenant from backup.

What each tenant gets

Component         Isolation
Ironflow server   Dedicated pod(s) in tenant namespace
NATS JetStream    Dedicated StatefulSet in tenant namespace
PostgreSQL        Dedicated CNPG Cluster in tenant namespace
Network           Default-deny NetworkPolicy, intra-namespace only
Resources         ResourceQuota (CPU, memory, storage, pods)
Ingress           Separate hostname + TLS certificate
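
You can see these per-tenant objects directly; a quick look for the acme tenant might be (the CNPG resource kind assumes the operator’s standard CRD):

Terminal window
kubectl -n tenant-acme get networkpolicy,resourcequota,statefulsets
kubectl -n tenant-acme get clusters.postgresql.cnpg.io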

Customize per tenant

The values-multi-tenant.yaml template is a starting point. Override any value per tenant:

Terminal window
# Give a high-value tenant more resources
helm install premium ./deploy/helm/ironflow \
-n tenant-premium --create-namespace \
-f deploy/helm/ironflow/values-multi-tenant.yaml \
--set ingress.host=premium.ironflow.example.com \
--set ironflow.masterKey=$(openssl rand -hex 32) \
--set resourceQuota.cpu.limits=8 \
--set resourceQuota.memory.limits=16Gi \
--set postgresql.persistence.size=20Gi

Helm Chart Reference

For the full chart structure, configuration toggles, environment examples, and design decisions, see the Helm Chart Development guide.

Policies & Authz

For org/role/CEL policy authoring + auth audit, see Policies explanation and the policies how-to guides.

Operational Guides