Testing Workflows

Before activating a workflow for production use, you can run test executions to verify that nodes behave as expected, conditions branch correctly, and delivery reaches the right destinations.

Running a test

Start a test run from the workflow card's dropdown menu by selecting Test Run. The workflow executes immediately using the manual trigger path, regardless of how the trigger is actually configured. This means you can test a scheduled or webhook-triggered workflow without waiting for the schedule or sending a real webhook event.

Test runs are recorded in the execution history with triggered_by: "test" so you can distinguish them from production runs.
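Because test runs carry that marker, they are easy to separate from production runs when reviewing history. A minimal sketch, assuming executions are exposed as records with a triggered_by field (only triggered_by is documented here; the rest of the record shape is hypothetical):

```python
# Hypothetical execution records; only the triggered_by field comes from
# this doc, the surrounding fields are assumed for illustration.
executions = [
    {"id": "ex-1", "triggered_by": "test", "status": "failed"},
    {"id": "ex-2", "triggered_by": "schedule", "status": "succeeded"},
    {"id": "ex-3", "triggered_by": "test", "status": "succeeded"},
]

# Separate test runs from production runs.
test_runs = [e for e in executions if e["triggered_by"] == "test"]
prod_runs = [e for e in executions if e["triggered_by"] != "test"]

print([e["id"] for e in test_runs])  # ex-1 and ex-3
```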

Custom trigger payloads

When testing webhook or poll-triggered workflows, you can provide a custom JSON payload that simulates the incoming event data. This payload becomes available through {{ trigger.payload }} just as it would in a real execution.
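For instance, if a production webhook delivers an order event, the test payload should mirror that shape. A hypothetical payload (the field names order_id, total, and status are illustrative, not prescribed by the product):

```python
import json

# Hypothetical test payload for a webhook-triggered workflow.
test_payload = {
    "order_id": "A-1042",
    "total": 129.50,
    "status": "paid",
}

# In the workflow, {{ trigger.payload.total }} would resolve to this value:
print(json.dumps(test_payload, indent=2))
print(test_payload["total"])  # 129.5
```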

Providing a realistic test payload is important for workflows that use condition or router nodes — these nodes evaluate expressions against the trigger payload, so an empty or incorrect payload may cause unexpected branching.
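To see why payload shape matters, here is a minimal sketch of how a threshold condition might behave against complete, empty, and misspelled payloads. The evaluation logic is illustrative, not the product's actual expression engine:

```python
def check_threshold(payload: dict, threshold: float = 100.0) -> bool:
    """Illustrative stand-in for a condition like
    {{ trigger.payload.total > 100 }}."""
    # .get() mirrors what happens when the field is missing: the value
    # is absent and the condition silently evaluates to False.
    total = payload.get("total")
    return total is not None and total > threshold

print(check_threshold({"total": 129.5}))   # True  -> takes the "high" branch
print(check_threshold({}))                 # False -> always the other branch
print(check_threshold({"totall": 129.5}))  # False -> misspelled field name
```

This is the mechanism behind "condition always takes one branch": a missing or misspelled field doesn't raise a visible error, it just makes the expression falsy every time.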

Reviewing results

After a test run completes, open it in the execution history to inspect the node-by-node timeline:

Test Execution Timeline
  Manual Trigger   · Trigger · test payload provided · 0.0s
  Analyze Data     · Action · data_router, data_analyzer · 12.5s
  Check Threshold  · Condition · error: undefined variable · 0.1s

The timeline shows exactly where execution succeeded, failed, or took unexpected branches.

Node-level output inspection

Click any node in the timeline to see its full output:

  • Action nodes — the complete response text, agents used, and tools called
  • Condition nodes — the expression evaluated, the context values used, and the boolean result
  • Router nodes — which route was selected and why
  • Delivery nodes — delivery status for each configured method
  • Merge nodes — which branches contributed and their collected results

This level of detail helps you pinpoint exactly where a workflow's logic diverges from expectations.

Common failure patterns

Problem: Condition always takes one branch
Likely cause: Expression references a missing or misspelled payload field
Fix: Check {{ trigger.payload.field_name }} against your actual payload structure

Problem: Action node times out
Likely cause: Instruction is too broad or data source is slow
Fix: Narrow the instruction, increase the timeout, or enable retries

Problem: Delivery fails
Likely cause: Missing integration connection or invalid channel/URL
Fix: Connect the required integration or verify the delivery configuration

Problem: Merge node errors
Likely cause: No incoming branches executed
Fix: Ensure at least one branch path leads to the merge

Problem: Template rendering error
Likely cause: Referencing a node that hasn't executed yet
Fix: Check node ordering in the DAG — templates can only reference upstream nodes
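The last failure pattern follows from the workflow being a DAG: a node's template can only reference nodes guaranteed to have already run. A minimal sketch of that ordering check using Python's standard graphlib, with a hypothetical four-node workflow (the node names are illustrative):

```python
from graphlib import TopologicalSorter

# Hypothetical workflow DAG: each node maps to the nodes it depends on.
deps = {
    "trigger": set(),
    "analyze": {"trigger"},
    "check_threshold": {"analyze"},
    "deliver": {"check_threshold"},
}

# A valid execution order lists every node after its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)

def can_reference(template_node: str, referenced_node: str) -> bool:
    """A template in template_node may only reference nodes that come
    strictly earlier in the execution order."""
    return order.index(referenced_node) < order.index(template_node)

print(can_reference("deliver", "analyze"))          # True: upstream node
print(can_reference("analyze", "check_threshold"))  # False: not yet executed
```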

Edit-rerun iteration

The typical testing workflow is an iterative cycle:

  1. Run a test execution
  2. Inspect the timeline to find issues
  3. Edit the problematic node's configuration
  4. Rerun with the same or updated test payload
  5. Repeat until all nodes behave correctly

You can keep the workflow in Draft status throughout this cycle. Once satisfied with test results, save and activate to enable the production trigger.

Each edit creates a new version, so you can always compare what changed between test runs or revert if an edit makes things worse.