k3d Development Workflow
This guide sets up a fast iteration cycle for building an application that uses Ironflow while simultaneously fixing and improving Ironflow itself. You run Ironflow on a local k3d Kubernetes cluster via Helm, and your external app connects to it over localhost:9123. When you find something to fix, you switch to the Ironflow codebase, make the change, run make helm-redeploy, and your app immediately uses the updated server.
Prerequisites
- Docker running (Docker Desktop, Colima, or native)
- Go 1.25+, Node.js 20+, pnpm
- Helm, k3d, kubectl:
```bash
brew install helm k3d kubectl
```

One-Time Setup
1. Build Ironflow
```bash
cd /path/to/ironflow
make all
```
This compiles the binary (needed for the provision command) and builds the React dashboard.
2. Create the k3d cluster
```bash
./build/ironflow provision create --provider k3d --template small --name ironflow-dev
```
This creates a k3d cluster with 1 server + 1 agent node and switches your kubectl context to k3d-ironflow-dev.
Verify:
```bash
kubectl config current-context
# Expected: k3d-ironflow-dev
```
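You can also confirm the node layout; the node names below assume the cluster name ironflow-dev used above:

```bash
# Expect one server and one agent node, both prefixed with k3d-ironflow-dev
kubectl get nodes
```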
3. Install cluster operators

CloudNativePG (for bundled PostgreSQL) and kube-prometheus-stack (for Prometheus + Grafana):
```bash
# CloudNativePG operator
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace --wait
```
```bash
# Prometheus + Grafana (chart defaults are fine for local dev)
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring --create-namespace --wait
```

4. Build, import, and deploy
```bash
# Build the Docker image
docker build -t ironflow:local .

# Import into k3d (no registry needed)
k3d image import ironflow:local -c ironflow-dev

# Install the Helm chart with dev values
helm dependency update deploy/helm/ironflow/
helm install ironflow deploy/helm/ironflow/ \
  --namespace ironflow --create-namespace \
  -f deploy/helm/ironflow/values-dev.yaml
```
The values-dev.yaml file configures the local image (ironflow:local with pullPolicy: Never), bundled NATS + PostgreSQL, devMode, and enables metrics, the ServiceMonitor, Grafana dashboards, and alerts.
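To double-check what the release was installed with, Helm can print the user-supplied values (release name and namespace as above):

```bash
# Show the user-supplied values for the ironflow release
helm get values ironflow -n ironflow
```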
5. Wait for pods and verify
```bash
# Watch pods come up (CNPG creates PostgreSQL pods, may take 1-2 minutes)
kubectl get pods -n ironflow -w
```
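If you would rather block until everything is ready instead of watching, kubectl wait works too; the timeout below is an arbitrary choice, adjust to taste:

```bash
# Block until all pods in the namespace are Ready (or the timeout expires)
kubectl wait --for=condition=Ready pods --all -n ironflow --timeout=300s
```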
Once all pods show Running / Ready, open dedicated terminals for port-forwarding:

```bash
# Terminal 1 — Ironflow (leave running)
kubectl port-forward svc/ironflow -n ironflow 9123:9123

# Terminal 2 — Grafana (leave running)
kubectl port-forward svc/kube-prometheus-stack-grafana -n monitoring 3030:80

# Terminal 3 — Prometheus (leave running)
kubectl port-forward svc/prometheus-operated -n monitoring 9090:9090
```

| Service | URL |
|---|---|
| Ironflow Dashboard | http://localhost:9123 |
| Grafana | http://localhost:3030 |
| Prometheus | http://localhost:9090 |
Grafana credentials
The Grafana chart generates a random admin password. Retrieve it with:
```bash
# Username
kubectl get secret kube-prometheus-stack-grafana -n monitoring \
  -o jsonpath='{.data.admin-user}' | base64 -d && echo

# Password
kubectl get secret kube-prometheus-stack-grafana -n monitoring \
  -o jsonpath='{.data.admin-password}' | base64 -d && echo
```
Pre-built Ironflow dashboards are auto-imported via the Grafana sidecar.
Leave the port-forward terminals running for the entire session. Your external app connects through the Ironflow port-forward.
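Optionally, a small wrapper loop re-establishes the Ironflow port-forward if it drops (plain bash; the Grafana and Prometheus forwards can be wrapped the same way):

```bash
# Restart the port-forward whenever it exits (press Ctrl-C twice to stop)
while true; do
  kubectl port-forward svc/ironflow -n ironflow 9123:9123
  echo "port-forward exited, retrying in 2s..."
  sleep 2
done
```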
The Inner Loop
This is the core workflow you’ll repeat:
```
┌─────────────────────────────────────────────────────┐
│ 1. Work on your external app                        │
│    (points at http://localhost:9123)                │
│                                                     │
│ 2. Find a bug or something to improve in Ironflow   │
│                                                     │
│ 3. Switch to the Ironflow codebase, make changes    │
│                                                     │
│ 4. Run: make helm-redeploy                          │
│    (rebuilds image, imports to k3d, restarts pods)  │
│                                                     │
│ 5. Go back to your app — it uses the new Ironflow   │
└─────────────────────────────────────────────────────┘
```

make helm-redeploy
One command rebuilds and redeploys everything:
```bash
make helm-redeploy
```
This runs: make proto embed → docker build → k3d image import → kubectl rollout restart → wait for the rollout to complete.
The cluster name, namespace, and release name default to ironflow-dev, ironflow, and ironflow. Override them if your setup differs:
```bash
make helm-redeploy HELM_LOCAL_CLUSTER=my-cluster HELM_LOCAL_NAMESPACE=my-ns HELM_LOCAL_RELEASE=my-release
```

Connecting Your External App
Your external app talks to Ironflow at http://localhost:9123 (via the port-forward). How you wire it depends on whether you use an SDK.
If you use the JavaScript/TypeScript SDK, install the local SDK build:
```bash
# In the Ironflow repo — pack the SDK into tarballs
make sdk-js-pack
```
This creates .tgz files in /tmp/ironflow-packs/.
```bash
# In your external app — install from the tarballs
# List the packs directory to find the exact filenames, then install:
ls /tmp/ironflow-packs/
pnpm add file:/tmp/ironflow-packs/ironflow-node-<version>.tgz
# and/or for browser usage:
pnpm add file:/tmp/ironflow-packs/ironflow-browser-<version>.tgz
```
Use the exact filename from ls — not a glob pattern like ironflow-node-*.tgz. Shells like fish do not expand globs inside file: URIs, and even bash/zsh may not expand them as expected.
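If you would rather not copy the filename by hand, one workaround (bash/zsh) is to expand the glob with ls first and pass the resolved path to pnpm, so no glob ever appears inside the file: URI:

```bash
# Resolve the newest node tarball, then install it by exact path (bash/zsh)
PACK="$(ls -t /tmp/ironflow-packs/ironflow-node-*.tgz | head -n 1)"
pnpm add "file:${PACK}"
```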
Configure the connection in your app:
```bash
# .env or environment variable
IRONFLOW_SERVER_URL=http://localhost:9123
```

If you use the Go SDK, point at your local checkout by adding a replace directive to your app's go.mod:
```
module my-app

go 1.25

require github.com/sahina/ironflow/sdk/go/ironflow v0.0.0

replace github.com/sahina/ironflow/sdk/go/ironflow => /path/to/ironflow/sdk/go/ironflow
```
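After adding the replace directive, a rebuild is enough for Go to pick up the local checkout:

```bash
# In your external app: refresh module metadata and rebuild
go mod tidy
go build ./...
```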
Configure the connection in your app:

```bash
# .env or environment variable
IRONFLOW_SERVER_URL=http://localhost:9123
```

If your app talks to Ironflow over plain HTTP (no SDK), just set the base URL:
```bash
IRONFLOW_SERVER_URL=http://localhost:9123
```
No SDK installation needed.
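As a quick sanity check from the app's environment, you can hit the same /health endpoint used under Troubleshooting below, reusing the variable:

```bash
# Should succeed if the port-forward is up and the server is healthy
curl "${IRONFLOW_SERVER_URL:-http://localhost:9123}/health"
```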
When SDK Changes Are Involved
If your Ironflow fix touches both server code and SDK code, you need to update both sides:
For the JavaScript/TypeScript SDK:

```bash
# 1. In Ironflow repo — rebuild and redeploy the server
make helm-redeploy

# 2. Re-pack the SDK with your changes
make sdk-js-pack

# 3. In your external app — reinstall (use exact filename from ls, not a glob)
ls /tmp/ironflow-packs/
pnpm add file:/tmp/ironflow-packs/ironflow-node-<version>.tgz
```

For the Go SDK:

```bash
# 1. In Ironflow repo — rebuild and redeploy the server
make helm-redeploy

# 2. No extra step needed — the `replace` directive in go.mod
#    already points at your local checkout, so Go picks up
#    changes automatically on the next build.
```

Iterating on Helm Chart Changes
If you’re modifying Helm templates or values (not just Ironflow source code), use helm upgrade instead of make helm-redeploy:
```bash
# Preview what changed
helm template test deploy/helm/ironflow/ --show-only templates/configmap.yaml

# Apply the changes
helm upgrade ironflow deploy/helm/ironflow/ \
  --namespace ironflow \
  -f deploy/helm/ironflow/values-dev.yaml

# Check pod status
kubectl get pods -n ironflow
```
If you changed both Ironflow source code and Helm templates, run make helm-redeploy first (to get the new image), then helm upgrade (to apply the template changes).
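For that combined case, the ordering looks like this (same commands as above, back to back):

```bash
# 1. Get the new image into the cluster
make helm-redeploy

# 2. Then apply the template/values changes
helm upgrade ironflow deploy/helm/ironflow/ \
  --namespace ironflow \
  -f deploy/helm/ironflow/values-dev.yaml
```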
Clean Up
```bash
# Remove the Helm release
helm uninstall ironflow -n ironflow

# Delete the k3d cluster (removes everything, including monitoring)
./build/ironflow provision destroy --provider k3d --name ironflow-dev
```

Troubleshooting
Port-forward disconnects
Port-forward can drop on pod restarts (e.g., after make helm-redeploy). Just re-run:
```bash
kubectl port-forward svc/ironflow -n ironflow 9123:9123
```

Pods stuck in CrashLoopBackOff
Check logs:
```bash
kubectl logs -l app.kubernetes.io/component=server -n ironflow --tail=50
```
Common causes: CNPG PostgreSQL not ready yet (wait 1-2 minutes), or a bug in your Ironflow changes.
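If PostgreSQL looks like the culprit, you can inspect the CNPG Cluster resource directly; the resource name depends on the chart, so list first:

```bash
# CNPG Cluster resources and their reported status
kubectl get clusters.postgresql.cnpg.io -n ironflow

# Recent events on the server pods usually name the failing dependency
kubectl describe pod -l app.kubernetes.io/component=server -n ironflow | tail -n 30
```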
CNPG pods not appearing
Verify the operator is running:
```bash
kubectl get pods -n cnpg-system
```
If not, reinstall it: `helm install cnpg cnpg/cloudnative-pg -n cnpg-system --create-namespace --wait`
Image not updating after redeploy
Ensure pullPolicy: Never was set during the initial helm install. Check:
```bash
kubectl get deployment ironflow -n ironflow \
  -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'
# Expected: Never
```

External app can’t connect
Verify port-forward is running and Ironflow is healthy:
```bash
curl http://localhost:9123/health
```
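If that fails, confirm the Service actually has ready endpoints behind it (service name and namespace as used throughout this guide):

```bash
# The ironflow Service should list at least one endpoint address
kubectl get endpoints ironflow -n ironflow
```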