Helm Chart Development
- The Ironflow Helm chart lives at deploy/helm/ironflow/.
- In dev mode, it deploys Ironflow with bundled NATS (subchart) and CloudNativePG-managed PostgreSQL.
- In production mode, both PostgreSQL and NATS can be either bundled (CloudNativePG for PostgreSQL, 3-node cluster for NATS) or external (you provide connection URLs to your existing services).
Prerequisites
Tools
```bash
brew install helm k3d kubectl
```

Cluster Setup
Docker must be running for k3d (e.g., Docker Desktop, Colima, or native Docker on Linux). Note that k3d creates its own cluster separate from Docker Desktop’s built-in Kubernetes — the two are independent.
```bash
# Creates a minimal cluster: 1 server + 1 agent (equivalent to the "small" template)
k3d cluster create ironflow-dev --wait
```

For larger clusters that simulate HA, use ironflow provision with template-based sizing:
```bash
# Small (1 server + 1 agent — same as plain k3d cluster create)
ironflow provision create --provider k3d --template small --name ironflow-dev

# Medium (3 servers + 2 agents — simulates HA)
ironflow provision create --provider k3d --template medium --name ironflow-dev
```

k3d automatically switches your kubectl context to the new cluster. Verify you’re on the right context before continuing:
```bash
kubectl config current-context
# Expected: k3d-ironflow-dev
```

Ensure kubectl is configured and pointing at your cluster (kind, minikube, EKS, GKE, AKS, etc.):

```bash
kubectl config current-context
```

CloudNativePG operator
Required for bundled PostgreSQL (postgresql.bundled: true). Not needed if you provide externalDatabase.url. This installs into whichever cluster your current kubectl context points to.
```bash
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace --wait
```

Chart Structure
```text
deploy/helm/ironflow/
├── Chart.yaml                  # Metadata + subchart dependencies (NATS)
├── Chart.lock                  # Locked dependency versions
├── values.yaml                 # Dev defaults (bundled CNPG + NATS, auth disabled)
├── values-small.yaml           # Small template (1 replica, bundled)
├── values-medium.yaml          # Medium template (3 replicas, HA cluster)
├── values-large.yaml           # Large template (HPA, external deps)
├── values-multi-tenant.yaml    # Per-tenant deployment on shared cluster
├── templates/                  # Kubernetes manifest templates
│   ├── configmap.yaml          # Generates ironflow.yaml from Helm values
│   ├── deployment.yaml         # Ironflow pods
│   ├── postgresql.yaml         # CloudNativePG Cluster CR (when bundled)
│   ├── secret.yaml             # Master key, license key (DB URL from CNPG Secret)
│   ├── service.yaml            # ClusterIP on port 9123 (supports annotations for direct LB override)
│   └── ...                     # Ingress, HPA, NetworkPolicy, ServiceMonitor, etc.
└── values-dev.yaml             # Local k3d development values
```

Local Development
1. Run template tests (no cluster needed)
Validates that all template combinations render correctly:
```bash
make test-helm
```

This runs template tests covering dev mode (CNPG), production mode, cluster mode, existingSecret, optional templates, and validation errors.
2. Deploy to a local cluster
Build the Docker image, deploy the chart, and verify the health endpoint.
One command automates the full flow — create cluster, build, deploy, test, clean up:
```bash
make test-helm-e2e
```

This automates:
- Creates an ironflow-test k3d cluster
- Installs the CloudNativePG operator
- Generates protobuf code and builds the dashboard (make proto embed)
- Builds the Docker image locally
- Imports the image into k3d
- Installs the Helm chart with bundled NATS + CNPG PostgreSQL
- Waits for pods to be ready
- Port-forwards and hits /health
- Cleans up the cluster on exit
There is no equivalent one-command workflow for an existing Kubernetes cluster. Follow Section 3 (Manual step-by-step) below, selecting the Kubernetes tab in each step.
3. Manual step-by-step
Use this path if you want to keep the cluster running for interactive testing. Skip the cluster and CNPG steps if you already completed them in Prerequisites.
```bash
# Create cluster (skip if already created in Prerequisites)
k3d cluster create ironflow-dev --wait

# Install CloudNativePG operator (skip if already installed in Prerequisites)
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace --wait

# Generate protobuf code and build dashboard (both gitignored, required by Docker build)
make proto embed

# Build and import image (no registry needed)
docker build -t ironflow:local .
k3d image import ironflow:local -c ironflow-dev

# Install chart
helm dependency update deploy/helm/ironflow/
helm install ironflow deploy/helm/ironflow/ \
  --namespace ironflow --create-namespace \
  --set image.repository=ironflow \
  --set image.tag=local \
  --set image.pullPolicy=Never

# Watch pods come up (CNPG creates PostgreSQL pods, may take 1-2 minutes)
kubectl get pods -n ironflow -w

# Port-forward (in a separate terminal)
kubectl port-forward svc/ironflow -n ironflow 9123:9123

# Open http://localhost:9123
```

For an existing Kubernetes cluster instead of k3d, the steps below assume kubectl is configured and pointing at your cluster (kind, minikube, EKS, GKE, AKS, etc.):
```bash
# Install CloudNativePG operator (skip if already installed in Prerequisites)
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace --wait

# Generate protobuf code and build dashboard (both gitignored, required by Docker build)
make proto embed

# Build and push image to a registry
docker build -t ghcr.io/YOUR_ORG/ironflow:dev .
docker push ghcr.io/YOUR_ORG/ironflow:dev
```

kind and minikube shortcuts

- kind: kind load docker-image ironflow:local --name YOUR_CLUSTER
- minikube: Run eval $(minikube docker-env) first, then docker build -t ironflow:local .
When using local images, override the helm install values below with:
```bash
--set image.repository=ironflow --set image.tag=local --set image.pullPolicy=Never
```
```bash
# Install chart with your registry image
helm dependency update deploy/helm/ironflow/
helm install ironflow deploy/helm/ironflow/ \
  --namespace ironflow --create-namespace \
  --set image.repository=ghcr.io/YOUR_ORG/ironflow \
  --set image.tag=dev \
  --set image.pullPolicy=Always

# If using a private registry, create an image pull secret first:
# kubectl create secret docker-registry ghcr-creds \
#   --docker-server=ghcr.io \
#   --docker-username=YOUR_USER \
#   --docker-password=YOUR_TOKEN \
#   -n ironflow
# Then add: --set imagePullSecrets[0].name=ghcr-creds

# Watch pods come up (CNPG creates PostgreSQL pods, may take 1-2 minutes)
kubectl get pods -n ironflow -w

# Port-forward (in a separate terminal)
kubectl port-forward svc/ironflow -n ironflow 9123:9123

# Open http://localhost:9123
```

4. Rebuild and redeploy after code changes
For a streamlined iteration workflow where you develop an external app against Ironflow on k3d, see the k3d Development Workflow guide. It includes a one-command make helm-redeploy target.
After modifying Ironflow source code (Go, React dashboard, protobuf), rebuild the Docker image and deploy it to your running cluster:
```bash
# Re-generate protobuf and rebuild dashboard if sources changed
make proto embed

# Rebuild image from your current branch
docker build -t ironflow:local .

# Import into k3d (replaces the previous image)
k3d image import ironflow:local -c ironflow-dev

# Restart pods to pick up the new image
kubectl rollout restart deployment/ironflow -n ironflow

# Watch the rollout
kubectl rollout status deployment/ironflow -n ironflow
```

This works because step 3 sets pullPolicy=Never, so Kubernetes uses whatever image is loaded into k3d’s containerd — no registry needed.
If you deploy from a registry instead:

```bash
# Re-generate protobuf and rebuild dashboard if sources changed
make proto embed

# Rebuild and push to your registry
docker build -t ghcr.io/YOUR_ORG/ironflow:dev .
docker push ghcr.io/YOUR_ORG/ironflow:dev

# Restart pods to pick up the new image
kubectl rollout restart deployment/ironflow -n ironflow

# Watch the rollout
kubectl rollout status deployment/ironflow -n ironflow
```

If you use a unique tag per build (e.g., ironflow:dev-$(git rev-parse --short HEAD)), Kubernetes will pull the new image automatically without needing rollout restart. Update the release with helm upgrade --set image.tag=dev-ABCDEF ....
5. Iterate on Helm changes
After modifying templates or values:
```bash
# Re-render and check templates (no cluster needed)
helm template test deploy/helm/ironflow/ --show-only templates/configmap.yaml
```

Upgrade the running release:
```bash
helm upgrade ironflow deploy/helm/ironflow/ \
  --namespace ironflow \
  --set image.repository=ironflow \
  --set image.tag=local \
  --set image.pullPolicy=Never

# Check pod status
kubectl get pods -n ironflow
kubectl logs -l app.kubernetes.io/component=server -n ironflow --tail=20
```

If deploying from a registry instead:

```bash
helm upgrade ironflow deploy/helm/ironflow/ \
  --namespace ironflow \
  --set image.repository=ghcr.io/YOUR_ORG/ironflow \
  --set image.tag=dev \
  --set image.pullPolicy=Always

# Check pod status
kubectl get pods -n ironflow
kubectl logs -l app.kubernetes.io/component=server -n ironflow --tail=20
```

6. Clean up
```bash
# Delete the Helm release
helm uninstall ironflow -n ironflow

# Delete the cluster (removes everything)
k3d cluster delete ironflow-dev
```

On an existing Kubernetes cluster:

```bash
# Delete the Helm release (keeps the cluster running)
helm uninstall ironflow -n ironflow

# Optionally remove the CNPG operator if no longer needed
helm uninstall cnpg -n cnpg-system
```

On a shared cluster, only uninstall the CNPG operator if no other applications depend on it.
Linting
```bash
helm lint deploy/helm/ironflow/
helm lint deploy/helm/ironflow/ -f deploy/helm/ironflow/values-large.yaml
```

Testing with Production Values
Test the chart with external dependencies (subcharts disabled):
```bash
helm template test deploy/helm/ironflow/ \
  -f deploy/helm/ironflow/values-large.yaml \
  --show-only templates/configmap.yaml
```

Verify the ConfigMap generates kind: Cluster with external NATS/PostgreSQL URLs.
Publishing
Push the chart to GHCR OCI registry:
```bash
make helm-publish
```

This is also automated in the release workflow (.github/workflows/release.yml) — the chart is published alongside Docker images and npm packages on every tagged release.
Install from OCI Registry
```bash
helm install ironflow oci://ghcr.io/sahina/charts/ironflow \
  --namespace ironflow --create-namespace
```

Environment Configuration
The chart ships with value files for different environments. Use these as a starting point or pass --set overrides.
Bundled NATS subchart and CloudNativePG-managed PostgreSQL. Requires the CNPG operator (see Prerequisites).
```yaml
# values.yaml (default) or values-small.yaml
ironflow:
  devMode: true       # Disables authentication
nats:
  bundled: true       # NATS subchart deployed alongside
postgresql:
  bundled: true       # CNPG Cluster CR created (1 instance, auto-generated password)
  instances: 1
```

```bash
# Install CNPG operator first (if not already installed)
helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace
```
```bash
# Then install Ironflow
helm install ironflow deploy/helm/ironflow/ \
  --namespace ironflow --create-namespace
```

Bundled NATS 3-node cluster and CNPG-managed PostgreSQL. Multi-replica with cluster coordination.
```yaml
replicaCount: 3
ironflow:
  devMode: false
  masterKey: "<openssl rand -hex 32>"
cluster:
  enabled: true
  staleClaimThreshold: "2m"
nats:
  bundled: true
  config:
    cluster:
      enabled: true
      replicas: 3
postgresql:
  bundled: true
```

```bash
helm install ironflow deploy/helm/ironflow/ \
  --namespace ironflow --create-namespace \
  -f deploy/helm/ironflow/values-medium.yaml
```

Both NATS and PostgreSQL run in-cluster. NATS deploys as a 3-node cluster with auth enabled. PostgreSQL uses CloudNativePG with HA (requires the CNPG operator). No external infrastructure needed.
```yaml
# values-medium.yaml (bundled NATS cluster variant)
replicaCount: 3
ironflow:
  devMode: false
  masterKey: "<openssl rand -hex 32>"
cluster:
  enabled: true
nats:
  bundled: true
  config:
    jetstream:
      enabled: true
      fileStore:
        pvc:
          size: 10Gi
    cluster:
      enabled: true
      replicas: 3
    merge:
      authorization:
        token: "<openssl rand -base64 32>"
postgresql:
  bundled: true
  instances: 2  # HA
```

```bash
# Install CNPG operator first (if not already installed)
helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace
```
```bash
# Then install Ironflow
helm install ironflow deploy/helm/ironflow/ \
  --namespace ironflow --create-namespace \
  -f my-production-bundled-values.yaml
```

All external dependencies — no CNPG operator required. You provide a PostgreSQL connection URL (e.g., from RDS, Cloud SQL, Neon, or any managed PostgreSQL). HPA, PDB, ServiceMonitor, and metrics enabled.
```yaml
replicaCount: 2
ironflow:
  devMode: false
  masterKey: "<openssl rand -hex 32>"
cluster:
  enabled: true
nats:
  bundled: false
postgresql:
  bundled: false
externalDatabase:
  url: "postgres://ironflow:pass@db.example.com:5432/ironflow?sslmode=require"
externalNats:
  url: "nats://nats-1:4222,nats://nats-2:4222"
observability:
  metrics:
    enabled: true
autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
podDisruptionBudget:
  enabled: true
  minAvailable: 1
serviceMonitor:
  enabled: true
```

```bash
helm install ironflow deploy/helm/ironflow/ \
  --namespace ironflow --create-namespace \
  -f deploy/helm/ironflow/values-large.yaml
```

Deploy multiple tenants on a shared Kubernetes cluster, one namespace per tenant. Each tenant gets its own Ironflow server, NATS, and PostgreSQL with full network isolation and resource limits.
```yaml
networkPolicy:
  enabled: true
  defaultDeny: true   # Blocks all cross-namespace traffic
  allowNamespaces:
    - traefik
    - monitoring
resourceQuota:
  enabled: true
  cpu: { requests: "2", limits: "4" }
  memory: { requests: "4Gi", limits: "8Gi" }
  storage: "50Gi"
  pods: "20"
nats:
  bundled: true       # Per-tenant NATS
postgresql:
  bundled: true       # Per-tenant PostgreSQL
ingress:
  enabled: true
  host: ""            # Set per tenant
```

```bash
# Deploy tenant A
helm install acme ./deploy/helm/ironflow \
  -n tenant-acme --create-namespace \
  -f deploy/helm/ironflow/values-multi-tenant.yaml \
  --set ingress.host=acme.ironflow.example.com \
  --set ironflow.masterKey=$(openssl rand -hex 32)
```
```bash
# Deploy tenant B
helm install globex ./deploy/helm/ironflow \
  -n tenant-globex --create-namespace \
  -f deploy/helm/ironflow/values-multi-tenant.yaml \
  --set ingress.host=globex.ironflow.example.com \
  --set ironflow.masterKey=$(openssl rand -hex 32)
```

Observability & Monitoring
When observability.metrics.enabled is set in values, the chart wires IRONFLOW_METRICS_ENABLED and OTel environment variables into the pod. The serviceMonitor.enabled value creates a Prometheus ServiceMonitor with a release label matching the Helm release name (required by kube-prometheus-stack).
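For example, the following values turn both on (a minimal sketch; the keys are the same ones shown in values-large.yaml above):

```yaml
observability:
  metrics:
    enabled: true   # wires IRONFLOW_METRICS_ENABLED and OTel env vars into the pod
serviceMonitor:
  enabled: true     # creates a ServiceMonitor with a release label matching the Helm release name
```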
The readiness probe uses /ready (checks PostgreSQL + NATS connectivity). The liveness probe uses /health (checks PostgreSQL only). This prevents NATS transient blips from cascading into pod restarts.
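In probe terms, the split looks roughly like this (a sketch: the paths and port 9123 come from the chart, while the timing values are illustrative assumptions):

```yaml
readinessProbe:
  httpGet:
    path: /ready          # checks PostgreSQL + NATS connectivity
    port: 9123
  periodSeconds: 10       # illustrative, not taken from the chart
livenessProbe:
  httpGet:
    path: /health         # checks PostgreSQL only, so a transient NATS blip never triggers a restart
    port: 9123
  periodSeconds: 10       # illustrative, not taken from the chart
```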
The NetworkPolicy template automatically detects bundled vs external dependencies. When NATS or PostgreSQL is bundled, egress rules use podSelector (same-namespace). When external, egress rules use namespaceSelector targeting networkPolicy.natsNamespace or networkPolicy.postgresNamespace. For multi-tenant deployments, enable networkPolicy.defaultDeny: true to add namespace-wide isolation that blocks all cross-namespace traffic except DNS and explicitly allowed namespaces (ingress controller, monitoring).
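For the external case, the values might look like the sketch below (key names are taken from the paragraph above; the namespace values are placeholders for your environment):

```yaml
networkPolicy:
  enabled: true
  defaultDeny: true              # namespace-wide isolation for multi-tenant setups
  natsNamespace: nats            # placeholder: namespace hosting your external NATS
  postgresNamespace: databases   # placeholder: namespace hosting your external PostgreSQL
  allowNamespaces:
    - traefik                    # ingress controller
    - monitoring
nats:
  bundled: false
postgresql:
  bundled: false
```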
For the full monitoring stack (Prometheus, Grafana, Alertmanager, BlackBox Exporter), see deploy/monitoring/ and the Observability guide.
Key Design Decisions
- ConfigMap bridge: The chart generates an ironflow.yaml ConfigMap from Helm values. The binary reads it via ironflow serve -f /config/ironflow.yaml. This avoids duplicating config parsing between Helm and Ironflow.
- CloudNativePG for PostgreSQL: When postgresql.bundled=true, the chart creates a CNPG Cluster CR. The CNPG operator (installed separately per cluster, like cert-manager) manages PostgreSQL pods, services, and credentials. The operator auto-generates a secure password and stores it in a {release}-postgresql-app Secret with a uri key containing the full connection string. No hardcoded passwords in values.
- Conditional subcharts: nats.bundled controls whether NATS deploys as a subchart (dev or production) or Ironflow connects to an external NATS. When bundled, nats.config.cluster.enabled and nats.config.cluster.replicas control whether NATS runs as a single instance (dev) or a multi-node cluster (production). nats.config.merge.authorization.token configures authentication — the token is passed to both the NATS server (via the subchart’s config merge) and the Ironflow client (embedded in the connection URL). postgresql.bundled controls whether a CNPG Cluster CR is created (dev/staging/production — requires the CNPG operator) or Ironflow uses externalDatabase.url (no CNPG operator needed).
- NATS is always external in K8s: Even when the NATS subchart is bundled, it runs as a separate pod. The ConfigMap always sets nats.url pointing to the subchart’s service — Ironflow never starts embedded NATS in Kubernetes.
- Secrets via env vars: Database URL (from CNPG Secret or external Secret), master key, and license key are injected as environment variables. The ConfigMap references ${IRONFLOW_DATABASE_URL}, which Ironflow’s YAML parser resolves at startup (see the sketch after this list).
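As a rough illustration of the ConfigMap bridge and env-var substitution, a rendered ironflow.yaml might look something like this (a hedged sketch: only kind: Cluster, nats.url, and the ${IRONFLOW_DATABASE_URL} placeholder are documented above; the surrounding field names and the NATS service name are assumptions):

```yaml
# Sketch of /config/ironflow.yaml as mounted from the ConfigMap (illustrative, not the chart's exact output)
kind: Cluster                          # cluster mode renders kind: Cluster (see "Testing with Production Values")
database:
  url: ${IRONFLOW_DATABASE_URL}        # resolved at startup from the env var injected from the CNPG or external Secret
nats:
  url: nats://ironflow-nats:4222       # assumed subchart service name; always a URL to a separate pod, never embedded NATS
```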
Next Steps
- Hetzner Cloud Deployment — deploy Ironflow on a production Kubernetes cluster on Hetzner Cloud using Terraform
- kubectl Operations — day-to-day commands for managing a running Ironflow cluster