AI-Assisted Development

This guide helps you use AI coding assistants (Claude Code, Cursor, Copilot, Windsurf, etc.) effectively when building applications with Ironflow.

AI assistants excel at writing workflow code when they understand the core architectural patterns. Without proper context, they often make predictable mistakes:

  • Non-idempotent steps: Creating side effects that repeat on retry.
  • Side effects outside steps: Performing I/O that runs on every replay.
  • Missing memoization: Treating workflows like standard scripts.
  • Incorrect event matching: Using wrong field paths for correlation.
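The first three mistakes share one root cause: a durable workflow's handler body replays after every crash or retry, and only results recorded by completed steps are memoized. Here is a minimal TypeScript sketch of that replay model — the `runStep` helper and `Memo` type are illustrative stand-ins, not the actual Ironflow SDK:

```typescript
// Illustrative replay model -- NOT the real Ironflow API.
// Durable engines re-run the workflow body on retry, but return the
// recorded result for any step that already completed.
type Memo = Map<string, unknown>;

let emailsSent = 0; // side effect we want to happen exactly once
const sendEmail = (): string => { emailsSent++; return "sent"; };

function runStep<T>(memo: Memo, id: string, fn: () => T): T {
  if (memo.has(id)) return memo.get(id) as T; // replay: skip the side effect
  const result = fn();
  memo.set(id, result); // record the result under a unique step ID
  return result;
}

function workflow(memo: Memo): void {
  // Correct: the side effect lives inside a step, so a replay reuses
  // the memoized result instead of sending a second email.
  runStep(memo, "send-confirmation-email", sendEmail);
  // Wrong (for contrast): calling sendEmail() here, outside a step,
  // would execute again on every replay.
}

const memo: Memo = new Map();
workflow(memo); // first execution
workflow(memo); // replay after a simulated crash
console.log(emailsSent); // 1 -- memoized, not repeated
```

The key property: re-running `workflow` is harmless because the side effect is keyed by a unique step ID and skipped on replay. An AI assistant that understands this model avoids the first three mistakes automatically.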

The MCP server lets your AI assistant interact with your Ironflow server directly from the IDE — inspect runs, emit events, query projections, and debug.

Add to your project’s .mcp.json:

{
  "mcpServers": {
    "ironflow": {
      "command": "ironflow",
      "args": ["mcp", "--allow-writes"]
    }
  }
}

For servers with auth enabled or custom URLs:

{
  "mcpServers": {
    "ironflow": {
      "command": "ironflow",
      "args": ["mcp", "--allow-writes", "--server-url", "http://localhost:9000", "--api-key", "ifkey_your_key"]
    }
  }
}

Install AI Skills for interactive, workflow-guided assistance across Claude Code, Codex CLI, and Gemini CLI:

curl -fsSL https://raw.githubusercontent.com/sahina/ironflow/main/install-skills.sh | bash

Skills activate automatically based on your prompt — use /ironflow as the universal entry point, or invoke specialized skills directly (/ironflow-start, /ironflow-code, /ironflow-ops, /ironflow-docs).

Copy the Agent Template into your project root as agent.md, CLAUDE.md, or .cursorrules. This gives the AI:

  • SDK reference for TypeScript and Go
  • Critical patterns (all side effects inside step.run(), unique step IDs, pure projections)
  • Common mistakes to avoid
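On the "pure projections" pattern: a projection should derive its state solely by folding events, with no I/O, clocks, or randomness, so it can be rebuilt deterministically at any time. A hedged TypeScript sketch — the event shapes and reducer below are illustrative assumptions, not Ironflow's actual projection API:

```typescript
// Illustrative pure projection -- event shapes are assumptions, not
// Ironflow's actual API. The reducer touches nothing but its inputs.
type OrderEvent =
  | { type: "order.placed"; orderId: string; amount: number }
  | { type: "order.cancelled"; orderId: string };

interface OrderStats { placed: number; cancelled: number; revenue: number }

const initial: OrderStats = { placed: 0, cancelled: 0, revenue: 0 };

// Pure: same events in, same state out -- safe to replay from scratch.
function apply(state: OrderStats, event: OrderEvent): OrderStats {
  switch (event.type) {
    case "order.placed":
      return { ...state, placed: state.placed + 1, revenue: state.revenue + event.amount };
    case "order.cancelled":
      return { ...state, cancelled: state.cancelled + 1 };
  }
}

const events: OrderEvent[] = [
  { type: "order.placed", orderId: "ord-1", amount: 50 },
  { type: "order.placed", orderId: "ord-2", amount: 30 },
  { type: "order.cancelled", orderId: "ord-1" },
];

const stats = events.reduce(apply, initial);
console.log(stats); // { placed: 2, cancelled: 1, revenue: 80 }
```

Because `apply` is pure, folding the same event log always yields the same state, which is what makes projections safe to rebuild after schema changes or bugs.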

If you’ve installed AI Skills, the agent template is optional — skills include the same SDK reference plus interactive workflows.

The CLI is your primary tool (100% coverage vs MCP’s ~42%). Key pattern:

# Test a workflow with synchronous feedback
ironflow emit order.placed --data '{"orderId":"ord-1"}' --wait --json

# Debug failures
ironflow run list --status failed --json
ironflow run get <run-id> --json

# Time-travel debug
ironflow inspect <run-id>

The --wait flag is critical — it blocks until the run completes, giving the AI a tight build-test-fix loop.


Example prompts that work well:

  • “Create a workflow that processes orders with idempotent steps”
  • “Emit an order.placed event and show me the run result”
  • “Debug the failed run from step 3”
  • “Query the order-stats projection”
  • “Show me recent runs and their step outputs”

| Use Case                        | Recommended Tool  | Why                                |
|---------------------------------|-------------------|------------------------------------|
| Testing workflows with feedback | CLI (--wait flag) | Synchronous result, tight loop     |
| Debugging failed runs           | CLI               | Full step details, --json output   |
| KV store operations             | MCP               | CLI has no ironflow kv command yet |
| Worker monitoring               | MCP               | ironflow_list_workers tool         |
| Overview stats                  | MCP               | ironflow_overview tool             |

See the full CLI vs MCP Coverage comparison for details.