Helm Chart Development

  • The Ironflow Helm chart lives at deploy/helm/ironflow/.
  • In dev mode, it deploys Ironflow with bundled NATS (subchart) and CloudNativePG-managed PostgreSQL.
  • In production mode, both PostgreSQL and NATS can be either bundled (CloudNativePG for PostgreSQL, 3-node cluster for NATS) or external (you provide connection URLs to your existing services).

Prerequisites

Tools

Terminal window
brew install helm k3d kubectl

Cluster Setup

Docker must be running for k3d (e.g., Docker Desktop, Colima, or native Docker on Linux). Note that k3d creates its own cluster separate from Docker Desktop’s built-in Kubernetes — the two are independent.

Terminal window
# Creates a minimal cluster: 1 server + 1 agent (equivalent to the "small" template)
k3d cluster create ironflow-dev --wait

For larger clusters that simulate HA, use ironflow provision with template-based sizing:

Terminal window
# Small (1 server + 1 agent — same as plain k3d cluster create)
ironflow provision create --provider k3d --template small --name ironflow-dev
# Medium (3 servers + 2 agents — simulates HA)
ironflow provision create --provider k3d --template medium --name ironflow-dev

k3d automatically switches your kubectl context to the new cluster. Verify you’re on the right context before continuing:

Terminal window
kubectl config current-context
# Expected: k3d-ironflow-dev

CloudNativePG operator

Required for bundled PostgreSQL (postgresql.bundled: true). Not needed if you provide externalDatabase.url. This installs into whichever cluster your current kubectl context points to.

Terminal window
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace --wait

Chart Structure

deploy/helm/ironflow/
├── Chart.yaml                 # Metadata + subchart dependencies (NATS)
├── Chart.lock                 # Locked dependency versions
├── values.yaml                # Dev defaults (bundled CNPG + NATS, auth disabled)
├── values-small.yaml          # Small template (1 replica, bundled)
├── values-medium.yaml         # Medium template (3 replicas, HA cluster)
├── values-large.yaml          # Large template (HPA, external deps)
├── values-multi-tenant.yaml   # Per-tenant deployment on shared cluster
├── templates/                 # Kubernetes manifest templates
│   ├── configmap.yaml         # Generates ironflow.yaml from Helm values
│   ├── deployment.yaml        # Ironflow pods
│   ├── postgresql.yaml        # CloudNativePG Cluster CR (when bundled)
│   ├── secret.yaml            # Master key, license key (DB URL from CNPG Secret)
│   ├── service.yaml           # ClusterIP on port 9123 (supports annotations for direct LB override)
│   └── ...                    # Ingress, HPA, NetworkPolicy, ServiceMonitor, etc.
└── values-dev.yaml            # Local k3d development values

Local Development

1. Run template tests (no cluster needed)

Validates that all template combinations render correctly:

Terminal window
make test-helm

This runs template tests covering dev mode (CNPG), production mode, cluster mode, existingSecret, optional templates, and validation errors.

2. Deploy to a local cluster

Build the Docker image, deploy the chart, and verify the health endpoint.

One command automates the full flow — create cluster, build, deploy, test, clean up:

Terminal window
make test-helm-e2e

This automates:

  1. Creates an ironflow-test k3d cluster
  2. Installs the CloudNativePG operator
  3. Generates protobuf code and builds the dashboard (make proto embed)
  4. Builds the Docker image locally
  5. Imports the image into k3d
  6. Installs the Helm chart with bundled NATS + CNPG PostgreSQL
  7. Waits for pods to be ready
  8. Port-forwards and hits /health
  9. Cleans up the cluster on exit

3. Manual step-by-step

Use this path if you want to keep the cluster running for interactive testing. Skip the cluster and CNPG steps if you already completed them in Prerequisites.

Terminal window
# Create cluster (skip if already created in Prerequisites)
k3d cluster create ironflow-dev --wait
# Install CloudNativePG operator (skip if already installed in Prerequisites)
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace --wait
# Generate protobuf code and build dashboard (both gitignored, required by Docker build)
make proto embed
# Build and import image (no registry needed)
docker build -t ironflow:local .
k3d image import ironflow:local -c ironflow-dev
# Install chart
helm dependency update deploy/helm/ironflow/
helm install ironflow deploy/helm/ironflow/ \
  --namespace ironflow --create-namespace \
  --set image.repository=ironflow \
  --set image.tag=local \
  --set image.pullPolicy=Never
# Watch pods come up (CNPG creates PostgreSQL pods, may take 1-2 minutes)
kubectl get pods -n ironflow -w
# Port-forward (in a separate terminal)
kubectl port-forward svc/ironflow -n ironflow 9123:9123
# Open http://localhost:9123

4. Rebuild and redeploy after code changes

For a streamlined iteration workflow where you develop an external app against Ironflow on k3d, see the k3d Development Workflow guide. It includes a one-command make helm-redeploy target.

After modifying Ironflow source code (Go, React dashboard, protobuf), rebuild the Docker image and deploy it to your running cluster:

Terminal window
# Re-generate protobuf and rebuild dashboard if sources changed
make proto embed
# Rebuild image from your current branch
docker build -t ironflow:local .
# Import into k3d (replaces the previous image)
k3d image import ironflow:local -c ironflow-dev
# Restart pods to pick up the new image
kubectl rollout restart deployment/ironflow -n ironflow
# Watch the rollout
kubectl rollout status deployment/ironflow -n ironflow

This works because step 3 sets pullPolicy=Never, so Kubernetes uses whatever image is loaded into k3d’s containerd — no registry needed.

5. Iterate on Helm changes

After modifying templates or values:

Terminal window
# Re-render and check templates (no cluster needed)
helm template test deploy/helm/ironflow/ --show-only templates/configmap.yaml

Upgrade the running release:

Terminal window
helm upgrade ironflow deploy/helm/ironflow/ \
  --namespace ironflow \
  --set image.repository=ironflow \
  --set image.tag=local \
  --set image.pullPolicy=Never
# Check pod status
kubectl get pods -n ironflow
kubectl logs -l app.kubernetes.io/component=server -n ironflow --tail=20

6. Clean up

Terminal window
# Delete the Helm release
helm uninstall ironflow -n ironflow
# Delete the cluster (removes everything)
k3d cluster delete ironflow-dev

Linting

Terminal window
helm lint deploy/helm/ironflow/
helm lint deploy/helm/ironflow/ -f deploy/helm/ironflow/values-large.yaml

Testing with Production Values

Test the chart with external dependencies (subcharts disabled):

Terminal window
helm template test deploy/helm/ironflow/ \
  -f deploy/helm/ironflow/values-large.yaml \
  --show-only templates/configmap.yaml

Verify the ConfigMap generates kind: Cluster with external NATS/PostgreSQL URLs.
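
When rendered with these production values, the generated ironflow.yaml should look roughly like the fragment below. kind: Cluster, nats.url, and the ${IRONFLOW_DATABASE_URL} reference are all described elsewhere in this guide; the exact key layout and the example NATS URL are assumptions, so compare against the real template output.

# Rough shape of the rendered ironflow.yaml (sketch; exact key layout may differ)
kind: Cluster
database:
  url: ${IRONFLOW_DATABASE_URL}       # resolved at startup from the injected env var
nats:
  url: nats://nats.example.com:4222   # external NATS URL supplied via values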

Publishing

Push the chart to GHCR OCI registry:

Terminal window
make helm-publish

This is also automated in the release workflow (.github/workflows/release.yml) — the chart is published alongside Docker images and npm packages on every tagged release.

Install from OCI Registry

Terminal window
helm install ironflow oci://ghcr.io/sahina/charts/ironflow \
  --namespace ironflow --create-namespace

Environment Configuration

The chart ships with value files for different environments. Use these as a starting point or pass --set overrides.

The default (dev) configuration uses the bundled NATS subchart and CloudNativePG-managed PostgreSQL. It requires the CNPG operator (see Prerequisites).

# values.yaml (default) or values-small.yaml
ironflow:
  devMode: true # Disables authentication
nats:
  bundled: true # NATS subchart deployed alongside
postgresql:
  bundled: true # CNPG Cluster CR created (1 instance, auto-generated password)
  instances: 1

Terminal window
# Install CNPG operator first (if not already installed)
helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace
# Then install Ironflow
helm install ironflow deploy/helm/ironflow/ \
  --namespace ironflow --create-namespace

Observability & Monitoring

When observability.metrics.enabled is set to true, the chart wires IRONFLOW_METRICS_ENABLED and the OTel environment variables into the pod. Setting serviceMonitor.enabled creates a Prometheus ServiceMonitor with a release label matching the Helm release name (required by kube-prometheus-stack).
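
As a sketch, enabling both in a values override looks like this. observability.metrics.enabled and serviceMonitor.enabled are the toggles named above; any deeper OTel settings (endpoints, sampling, and so on) are omitted because their key names are not documented in this section.

# Metrics + Prometheus scraping (sketch)
observability:
  metrics:
    enabled: true   # wires IRONFLOW_METRICS_ENABLED and OTel env vars into the pod
serviceMonitor:
  enabled: true     # ServiceMonitor gets a release label matching the Helm release name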

The readiness probe uses /ready (checks PostgreSQL + NATS connectivity). The liveness probe uses /health (checks PostgreSQL only). This prevents transient NATS blips from cascading into pod restarts.
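
Rendered into the Deployment, the probe split looks roughly like the sketch below. The /ready and /health paths and port 9123 come from this guide; any timing or threshold settings are omitted because their values are not documented here.

# Illustrative probe configuration (sketch)
readinessProbe:
  httpGet:
    path: /ready    # PostgreSQL + NATS connectivity
    port: 9123
livenessProbe:
  httpGet:
    path: /health   # PostgreSQL only, so transient NATS issues do not trigger restarts
    port: 9123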

The NetworkPolicy template automatically detects bundled vs external dependencies. When NATS or PostgreSQL is bundled, egress rules use podSelector (same-namespace). When external, egress rules use namespaceSelector targeting networkPolicy.natsNamespace or networkPolicy.postgresNamespace. For multi-tenant deployments, enable networkPolicy.defaultDeny: true to add namespace-wide isolation that blocks all cross-namespace traffic except DNS and explicitly allowed namespaces (ingress controller, monitoring).
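
For a multi-tenant namespace that talks to external dependencies, the relevant values might look like the sketch below. defaultDeny, natsNamespace, and postgresNamespace are the keys named above; the enabled toggle and the example namespace names are assumptions.

# NetworkPolicy values for a multi-tenant namespace (sketch)
networkPolicy:
  enabled: true                  # assumption: the exact toggle name may differ
  defaultDeny: true              # namespace-wide isolation; only DNS and allowed namespaces pass
  natsNamespace: nats            # namespaceSelector target for external NATS egress
  postgresNamespace: databases   # namespaceSelector target for external PostgreSQL egress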

For the full monitoring stack (Prometheus, Grafana, Alertmanager, BlackBox Exporter), see deploy/monitoring/ and the Observability guide.

Key Design Decisions

  • ConfigMap bridge: The chart generates an ironflow.yaml ConfigMap from Helm values. The binary reads it via ironflow serve -f /config/ironflow.yaml. This avoids duplicating config parsing between Helm and Ironflow.
  • CloudNativePG for PostgreSQL: When postgresql.bundled=true, the chart creates a CNPG Cluster CR. The CNPG operator (installed separately per cluster, like cert-manager) manages PostgreSQL pods, services, and credentials. The operator auto-generates a secure password and stores it in a {release}-postgresql-app Secret with a uri key containing the full connection string. No hardcoded passwords in values.
  • Conditional subcharts: nats.bundled controls whether NATS deploys as a subchart (dev or production) or Ironflow connects to an external NATS. When bundled, nats.config.cluster.enabled and nats.config.cluster.replicas control whether NATS runs as a single instance (dev) or a multi-node cluster (production). nats.config.merge.authorization.token configures authentication: the token is passed to both the NATS server (via the subchart’s config merge) and the Ironflow client (embedded in the connection URL). postgresql.bundled controls whether a CNPG Cluster CR is created (dev/staging/production; requires the CNPG operator) or Ironflow uses externalDatabase.url (no CNPG operator needed). A production-style NATS values sketch follows after this list.
  • NATS is always external in K8s: Even when the NATS subchart is bundled, it runs as a separate pod. The ConfigMap always sets nats.url pointing to the subchart’s service — Ironflow never starts embedded NATS in Kubernetes.
  • Secrets via env vars: Database URL (from CNPG Secret or external Secret), master key, and license key are injected as environment variables. The ConfigMap references ${IRONFLOW_DATABASE_URL} which Ironflow’s YAML parser resolves at startup.
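
A production-style NATS configuration built only from the keys named above might look like the following sketch; the token value is a placeholder.

# Bundled 3-node NATS cluster with token auth (sketch)
nats:
  bundled: true
  config:
    cluster:
      enabled: true          # multi-node cluster instead of a single dev instance
      replicas: 3
    merge:
      authorization:
        token: "CHANGE_ME"   # passed to the NATS server via config merge and embedded in Ironflow's connection URL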

Next Steps