
Security & Access Scenarios

Rotate Slack webhook URL

Trigger: Webhook URL compromised or Slack channel changed.

Steps:

Terminal window
# 1. Create a new Incoming Webhook in your Slack workspace
# Go to: https://api.slack.com/apps → Your App → Incoming Webhooks
# Add a new webhook URL pointing to the #ironflow-alerts channel
# Copy the new URL (https://hooks.slack.com/services/T.../B.../...)
# 2. Update the Kubernetes secret with the new URL
kubectl create secret generic alertmanager-slack \
  -n monitoring \
  --from-literal=webhook-url='https://hooks.slack.com/services/NEW/WEBHOOK/URL' \
  --dry-run=client -o yaml | kubectl apply -f -
# 3. Restart Alertmanager to pick up the new secret
# Alertmanager reads the webhook URL from a file mounted from the secret.
# It does not hot-reload secret changes.
kubectl rollout restart statefulset alertmanager-kube-prometheus-stack-alertmanager -n monitoring
kubectl rollout status statefulset alertmanager-kube-prometheus-stack-alertmanager -n monitoring
# 4. Verify the new webhook is working
# Check Alertmanager logs for send errors
kubectl logs -n monitoring -l app.kubernetes.io/name=alertmanager --tail=20
# The Watchdog alert pings every minute. Wait ~1 minute, then check
# Alertmanager for successful sends:
kubectl port-forward svc/kube-prometheus-stack-alertmanager -n monitoring 9093:9093
# Open http://localhost:9093/#/alerts — Watchdog should show as active
# Or test the new URL directly:
kubectl get secret alertmanager-slack -n monitoring \
  -o jsonpath='{.data.webhook-url}' | base64 -d | \
  xargs -I{} curl -X POST -H 'Content-Type: application/json' \
    -d '{"text":"Webhook rotation verified"}' '{}'

After rotation: Revoke the old webhook URL in Slack App settings to prevent misuse.
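
If you want to confirm the revocation took effect, one quick check (assuming you still have the old URL on hand; the exact error Slack returns may vary) is to post to both URLs and compare the responses:

Terminal window
# Old URL should be rejected once revoked (Slack typically answers 404 with "no_service")
curl -s -o /dev/null -w '%{http_code}\n' -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text":"should fail"}' \
  'https://hooks.slack.com/services/OLD/WEBHOOK/URL'
# New URL should return 200 with body "ok"
curl -s -w '\n%{http_code}\n' -X POST \
  -H 'Content-Type: application/json' \
  -d '{"text":"rotation check"}' \
  'https://hooks.slack.com/services/NEW/WEBHOOK/URL'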


Rotate Grafana admin password

Trigger: Password needs rotation per policy or suspected leak.

Steps:

Terminal window
# 1. Generate a new password
NEW_PASSWORD=$(openssl rand -base64 16)
echo "New Grafana password: $NEW_PASSWORD"
# Save this somewhere secure (password manager, vault).
# 2. Update the Kubernetes secret
kubectl create secret generic grafana-admin \
  -n monitoring \
  --from-literal=admin-user=admin \
  --from-literal=admin-password="$NEW_PASSWORD" \
  --dry-run=client -o yaml | kubectl apply -f -
# 3. Restart Grafana to pick up the new credentials
kubectl rollout restart deployment kube-prometheus-stack-grafana -n monitoring
kubectl rollout status deployment kube-prometheus-stack-grafana -n monitoring
# 4. Log in with the new password
kubectl port-forward svc/kube-prometheus-stack-grafana -n monitoring 3000:80
# Open http://localhost:3000
# Username: admin
# Password: (the value you generated above)

Grafana stores its own user database

Grafana uses a SQLite database inside its pod for user management. The grafana-admin secret sets the initial admin password on first boot and resets it on restart. If you have created additional Grafana users, their passwords are not affected by this rotation.
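
If you prefer to verify without the browser, the same check can be done against Grafana's HTTP API (a quick sketch, assuming the port-forward from step 4 is still running on localhost:3000):

Terminal window
# Basic-auth request against the Grafana API; assumes the port-forward from step 4 is still active
# 200 plus the org JSON means the rotated password is live; a wrong password returns 401
curl -sf -u "admin:$NEW_PASSWORD" http://localhost:3000/api/org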


Rotate Ironflow master key

Trigger: Master key compromised. Need to re-encrypt secrets.

Existing secrets become unreadable

The master key (IRONFLOW_MASTER_KEY) is used to encrypt secrets stored in NATS KV (SYS_secrets_* buckets). Rotating the key means all previously encrypted secrets cannot be decrypted. You must re-set every secret after rotation.

Steps:

Terminal window
# 1. Generate a new AES-256 master key (64 hex characters)
NEW_KEY=$(openssl rand -hex 32)
echo "New master key: $NEW_KEY"
# Save this somewhere secure.
# 2. Update the Ironflow deployment with the new key
ironflow deploy upgrade --template medium --name my-release \
  --set ironflow.masterKey="$NEW_KEY"
# Or with Helm directly:
# helm upgrade ironflow deploy/helm/ironflow/ -n ironflow \
#   --reuse-values --set ironflow.masterKey="$NEW_KEY"
# This updates the ironflow-secret Secret and triggers a rolling restart
# (the Deployment has a checksum/secret annotation that detects changes).
# 3. Wait for the rollout to complete
kubectl rollout status deployment/ironflow -n ironflow
# 4. Verify pods are healthy with the new key
kubectl get pods -n ironflow -l app.kubernetes.io/component=server
kubectl port-forward svc/ironflow -n ironflow 9123:9123
curl -sf http://localhost:9123/health
# 5. Re-set ALL secrets — they were encrypted with the old key
# List the secrets you had configured:
ironflow secret list --env default
# Re-set each one:
ironflow secret set MY_API_KEY "new-or-same-value" --env default
ironflow secret set DATABASE_TOKEN "new-or-same-value" --env default
# Repeat for every secret in every environment.

Tip: Before rotating, export a list of all secret names per environment so you know exactly what needs to be re-set. The secret values themselves cannot be exported (they are encrypted), so you need the original values from your secrets manager or vault.
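
A small helper for that inventory step is sketched below; the environment names other than default are placeholders, so adjust the list to whatever environments you actually use:

Terminal window
# Capture the configured secret names per environment before rotating the master key
for env in default staging production; do   # "staging" and "production" are hypothetical names
  ironflow secret list --env "$env" > "secrets-$env.txt"
done
# After the rotation, walk each file and re-set every entry with values
# pulled from your secrets manager or vault.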


Rotate database credentials

Trigger: PostgreSQL password needs rotation per security policy or suspected compromise.

CloudNativePG auto-generates credentials and stores them in a Kubernetes Secret named {cluster}-app. CNPG manages the PostgreSQL application user and keeps its database password in sync with that secret.
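
Before rotating, it can be useful to note what the application is currently pointed at. One way, sketched here, is to decode the uri field from that secret (the output shape shown is illustrative):

Terminal window
# Decode the connection URI CNPG generated for the application user
kubectl get secret ironflow-postgresql-app -n ironflow \
  -o jsonpath='{.data.uri}' | base64 -d; echo
# Illustrative shape: postgresql://app:<password>@ironflow-postgresql-rw:5432/app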

Terminal window
# 1. Identify the CNPG cluster and its credential secret
kubectl get clusters.postgresql.cnpg.io -n ironflow
# NAME                  AGE   INSTANCES   READY
# ironflow-postgresql   7d    2           2
kubectl get secret ironflow-postgresql-app -n ironflow
# This secret contains: username, password, host, port, dbname, uri, jdbc-uri
# 2. Delete the password secret — CNPG will recreate it with a new password
kubectl delete secret ironflow-postgresql-app -n ironflow
# CNPG operator detects the missing secret within seconds,
# generates a new password, creates a new secret, and updates
# the PostgreSQL user's password in the database.
# 3. Wait for the secret to be recreated
kubectl get secret ironflow-postgresql-app -n ironflow -w
# Once it reappears, the new password is active in both K8s and PG.
# 4. Restart Ironflow pods to pick up the new connection string
kubectl rollout restart deployment ironflow -n ironflow
kubectl rollout status deployment ironflow -n ironflow
# 5. Verify Ironflow can connect to the database
kubectl port-forward svc/ironflow -n ironflow 9123:9123
curl -sf http://localhost:9123/ready
# /ready checks both PostgreSQL and NATS connectivity.
# 200 = both are reachable with valid credentials.
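
For a direct database-level check, independent of Ironflow, you can connect with the freshly generated credentials. This sketch assumes the usual CNPG naming (an instance pod named ironflow-postgresql-1 and a read-write service named ironflow-postgresql-rw); adjust to your cluster:

Terminal window
# Pull the regenerated credentials out of the recreated secret
DB_USER=$(kubectl get secret ironflow-postgresql-app -n ironflow -o jsonpath='{.data.username}' | base64 -d)
DB_PASS=$(kubectl get secret ironflow-postgresql-app -n ironflow -o jsonpath='{.data.password}' | base64 -d)
DB_NAME=$(kubectl get secret ironflow-postgresql-app -n ironflow -o jsonpath='{.data.dbname}' | base64 -d)
# Pod and service names below follow CNPG defaults; adjust if yours differ
kubectl exec -n ironflow ironflow-postgresql-1 -- \
  env PGPASSWORD="$DB_PASS" psql -h ironflow-postgresql-rw -U "$DB_USER" -d "$DB_NAME" -c 'SELECT 1;'
# A returned row confirms PostgreSQL accepts the rotated password.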

See kubectl Operations for additional PostgreSQL health checks and connectivity diagnostics.