k3d Development Workflow

This guide sets up a fast iteration cycle for building an application that uses Ironflow while simultaneously fixing and improving Ironflow itself. You run Ironflow on a local k3d Kubernetes cluster via Helm, and your external app connects to it over localhost:9123. When you find something to fix, you switch to the Ironflow codebase, make the change, run make helm-redeploy, and your app immediately uses the updated server.

Prerequisites

  • Docker running (Docker Desktop, Colima, or native)
  • Go 1.25+, Node.js 20+, pnpm
  • Helm, k3d, kubectl:
Terminal window
brew install helm k3d kubectl
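
A quick sanity check that everything is installed and on your PATH (exact version output varies by install method):

Terminal window
docker version --format '{{.Server.Version}}'
go version
node --version
pnpm --version
helm version --short
k3d version
kubectl version --client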

One-Time Setup

1. Build Ironflow

Terminal window
cd /path/to/ironflow
make all

This compiles the binary (needed for the provision command) and builds the React dashboard.
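
A quick check that the build landed where the next step expects it:

Terminal window
ls -lh build/ironflow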

2. Create the k3d cluster

Terminal window
./build/ironflow provision create --provider k3d --template small --name ironflow-dev

This creates a k3d cluster with 1 server + 1 agent node and switches your kubectl context to k3d-ironflow-dev.

Verify:

Terminal window
kubectl config current-context
# Expected: k3d-ironflow-dev
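
You can also confirm both nodes registered (k3d names them after the cluster):

Terminal window
kubectl get nodes
# Expect two Ready nodes: k3d-ironflow-dev-server-0 and k3d-ironflow-dev-agent-0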

3. Install cluster operators

CloudNativePG (for bundled PostgreSQL) and kube-prometheus-stack (for Prometheus + Grafana):

Terminal window
# CloudNativePG operator
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace --wait
# Prometheus + Grafana (chart defaults are fine for local dev)
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
-n monitoring --create-namespace --wait
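
Before moving on, confirm both operators are up:

Terminal window
kubectl get pods -n cnpg-system
kubectl get pods -n monitoring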

4. Build, import, and deploy

Terminal window
# Build the Docker image
docker build -t ironflow:local .
# Import into k3d (no registry needed)
k3d image import ironflow:local -c ironflow-dev
# Install the Helm chart with dev values
helm dependency update deploy/helm/ironflow/
helm install ironflow deploy/helm/ironflow/ \
--namespace ironflow --create-namespace \
-f deploy/helm/ironflow/values-dev.yaml

The values-dev.yaml file configures the local image (ironflow:local with pullPolicy: Never), bundled NATS + PostgreSQL, devMode, and enables metrics, ServiceMonitor, Grafana dashboards, and alerts.
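
To double-check that those values actually take effect, you can render the chart locally and grep the manifests for the image settings (an optional spot check):

Terminal window
helm template ironflow deploy/helm/ironflow/ \
-f deploy/helm/ironflow/values-dev.yaml | grep -E 'image:|imagePullPolicy:'
# Expect ironflow:local and Never among the matches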

5. Wait for pods and verify

Terminal window
# Watch pods come up (CNPG creates PostgreSQL pods, may take 1-2 minutes)
kubectl get pods -n ironflow -w

Once all pods show Running / Ready, open dedicated terminals for port-forwarding:

Terminal window
# Terminal 1 — Ironflow (leave running)
kubectl port-forward svc/ironflow -n ironflow 9123:9123
# Terminal 2 — Grafana (leave running)
kubectl port-forward svc/kube-prometheus-stack-grafana -n monitoring 3030:80
# Terminal 3 — Prometheus (leave running)
kubectl port-forward svc/prometheus-operated -n monitoring 9090:9090

Service               URL
Ironflow Dashboard    http://localhost:9123
Grafana               http://localhost:3030
Prometheus            http://localhost:9090
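
A quick smoke test once the forwards are up (the same /health endpoint used in Troubleshooting below):

Terminal window
curl -s http://localhost:9123/health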

Grafana credentials

The Grafana chart generates a random admin password. Retrieve it with:

Terminal window
# Username
kubectl get secret kube-prometheus-stack-grafana -n monitoring \
-o jsonpath='{.data.admin-user}' | base64 -d && echo
# Password
kubectl get secret kube-prometheus-stack-grafana -n monitoring \
-o jsonpath='{.data.admin-password}' | base64 -d && echo

Pre-built Ironflow dashboards are auto-imported via the Grafana sidecar.

Leave the port-forward terminals running for the entire session. Your external app connects through the Ironflow port-forward.


The Inner Loop

This is the core workflow you’ll repeat:

┌─────────────────────────────────────────────────────┐
│ 1. Work on your external app │
│ (points at http://localhost:9123) │
│ │
│ 2. Find a bug or something to improve in Ironflow │
│ │
│ 3. Switch to the Ironflow codebase, make changes │
│ │
│ 4. Run: make helm-redeploy │
│ (rebuilds image, imports to k3d, restarts pods) │
│ │
│ 5. Go back to your app — it uses the new Ironflow │
└─────────────────────────────────────────────────────┘

make helm-redeploy

One command rebuilds and redeploys everything:

Terminal window
make helm-redeploy

This runs: make proto embed → docker build → k3d image import → kubectl rollout restart → waits for the rollout to complete.

The cluster name, namespace, and release name default to ironflow-dev, ironflow, and ironflow. Override them if your setup differs:

Terminal window
make helm-redeploy HELM_LOCAL_CLUSTER=my-cluster HELM_LOCAL_NAMESPACE=my-ns HELM_LOCAL_RELEASE=my-release
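
Under the hood this is roughly equivalent to the following manual sequence (deployment name taken from the defaults above; the make target is the supported path, this is shown for understanding):

Terminal window
make proto embed
docker build -t ironflow:local .
k3d image import ironflow:local -c ironflow-dev
kubectl rollout restart deployment/ironflow -n ironflow
kubectl rollout status deployment/ironflow -n ironflow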

Connecting Your External App

Your external app talks to Ironflow at http://localhost:9123 (via the port-forward). How you wire it depends on whether you use an SDK.

Install the local SDK build:

Terminal window
# In the Ironflow repo — pack the SDK into tarballs
make sdk-js-pack

This creates .tgz files in /tmp/ironflow-packs/.

Terminal window
# In your external app — install from the tarballs
# List the packs directory to find the exact filenames, then install:
ls /tmp/ironflow-packs/
pnpm add file:/tmp/ironflow-packs/ironflow-node-<version>.tgz
# and/or for browser usage:
pnpm add file:/tmp/ironflow-packs/ironflow-browser-<version>.tgz

Use the exact filename from ls — not a glob pattern like ironflow-node-*.tgz. Shells like fish do not expand globs inside file: URIs, and even bash/zsh may not expand them as expected.

Configure the connection in your app:

Terminal window
# .env or environment variable
IRONFLOW_SERVER_URL=http://localhost:9123
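
From your app's shell, a minimal end-to-end check (reusing the health endpoint from Troubleshooting):

Terminal window
export IRONFLOW_SERVER_URL=http://localhost:9123
curl -s "$IRONFLOW_SERVER_URL/health"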

When SDK Changes Are Involved

If your Ironflow fix touches both server code and SDK code, you need to update both sides:

Terminal window
# 1. In Ironflow repo — rebuild and redeploy the server
make helm-redeploy
# 2. Re-pack the SDK with your changes
make sdk-js-pack
# 3. In your external app — reinstall (use exact filename from ls, not a glob)
ls /tmp/ironflow-packs/
pnpm add file:/tmp/ironflow-packs/ironflow-node-<version>.tgz
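
To confirm the reinstall picked up the new tarball (assuming the package is named ironflow-node, matching the tarball prefix; check your package.json if it differs):

Terminal window
pnpm list ironflow-node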

Iterating on Helm Chart Changes

If you’re modifying Helm templates or values (not just Ironflow source code), use helm upgrade instead of make helm-redeploy:

Terminal window
# Preview what changed
helm template test deploy/helm/ironflow/ --show-only templates/configmap.yaml
# Apply the changes
helm upgrade ironflow deploy/helm/ironflow/ \
--namespace ironflow \
-f deploy/helm/ironflow/values-dev.yaml
# Check pod status
kubectl get pods -n ironflow

If you changed both Ironflow source code and Helm templates, run make helm-redeploy first (to get the new image), then helm upgrade (to apply template changes).
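
In that combined case the sequence looks like:

Terminal window
make helm-redeploy
helm upgrade ironflow deploy/helm/ironflow/ \
--namespace ironflow \
-f deploy/helm/ironflow/values-dev.yaml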


Clean Up

Terminal window
# Remove the Helm release
helm uninstall ironflow -n ironflow
# Delete the k3d cluster (removes everything including monitoring)
./build/ironflow provision destroy --provider k3d --name ironflow-dev
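
Assuming the provision command delegates to k3d cluster delete (which also removes the kubectl context), you can confirm nothing is left behind:

Terminal window
kubectl config get-contexts
# k3d-ironflow-dev should no longer be listed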

Troubleshooting

Port-forward disconnects

Port-forward can drop on pod restarts (e.g., after make helm-redeploy). Just re-run:

Terminal window
kubectl port-forward svc/ironflow -n ironflow 9123:9123

Pods stuck in CrashLoopBackOff

Check logs:

Terminal window
kubectl logs -l app.kubernetes.io/component=server -n ironflow --tail=50

Common causes: CNPG PostgreSQL not ready yet (wait 1-2 minutes), or a bug in your Ironflow changes.
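
If the server keeps crashing on database errors, you can also inspect the CNPG Cluster resource directly (the CRD is installed by the operator; the bundled cluster is assumed to live in the ironflow namespace):

Terminal window
kubectl get clusters.postgresql.cnpg.io -n ironflow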

CNPG pods not appearing

Verify the operator is running:

Terminal window
kubectl get pods -n cnpg-system

If not, reinstall: helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace --wait

Image not updating after redeploy

Ensure pullPolicy: Never was set during the initial helm install. Check:

Terminal window
kubectl get deployment ironflow -n ironflow -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'
# Expected: Never
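
If the policy is wrong, re-apply the dev values and restart the deployment:

Terminal window
helm upgrade ironflow deploy/helm/ironflow/ \
--namespace ironflow \
-f deploy/helm/ironflow/values-dev.yaml
kubectl rollout restart deployment/ironflow -n ironflow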

External app can’t connect

Verify port-forward is running and Ironflow is healthy:

Terminal window
curl http://localhost:9123/health
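
If the check fails, confirm the Service exists and restart the port-forward:

Terminal window
kubectl get svc ironflow -n ironflow
kubectl port-forward svc/ironflow -n ironflow 9123:9123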