# Testing Workflows
Before activating a workflow for production use, you can run test executions to verify that nodes behave as expected, conditions branch correctly, and delivery reaches the right destinations.
## Running a test
Start a test run from the workflow card's dropdown menu by selecting Test Run. The workflow executes immediately using the manual trigger path, regardless of how the trigger is actually configured. This means you can test a scheduled or webhook-triggered workflow without waiting for the schedule or sending a real webhook event.
Test runs are recorded in the execution history with triggered_by: "test" so you can distinguish them from production runs.
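Because test runs carry `triggered_by: "test"`, they are easy to separate from production runs when scanning history. A minimal sketch, assuming execution records are exposed as objects with a `triggered_by` field (the record shape and field names besides `triggered_by` are illustrative, not the product's schema):

```python
# Hypothetical execution-history records. The only field documented here
# is triggered_by, which is "test" for test runs.
executions = [
    {"id": "ex_1", "triggered_by": "test", "status": "success"},
    {"id": "ex_2", "triggered_by": "webhook", "status": "success"},
    {"id": "ex_3", "triggered_by": "test", "status": "failed"},
]

# Partition history into test runs and production runs.
test_runs = [e for e in executions if e["triggered_by"] == "test"]
prod_runs = [e for e in executions if e["triggered_by"] != "test"]

print([e["id"] for e in test_runs])  # ['ex_1', 'ex_3']
print([e["id"] for e in prod_runs])  # ['ex_2']
```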
## Custom trigger payloads
When testing webhook or poll-triggered workflows, you can provide a custom JSON payload that simulates the incoming event data. This payload becomes available through {{ trigger.payload }} just as it would in a real execution.
Providing a realistic test payload is important for workflows that use condition or router nodes — these nodes evaluate expressions against the trigger payload, so an empty or incorrect payload may cause unexpected branching.
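To see why payload shape matters, here is a minimal sketch of how a condition node might resolve a reference like `{{ trigger.payload.priority }}` against the test payload. The `resolve` helper, the field names, and the payload shapes are illustrative assumptions, not the product's implementation:

```python
def resolve(path, context):
    """Walk a dotted path through nested dicts; return None if any step is missing."""
    value = context
    for part in path.split("."):
        if not isinstance(value, dict) or part not in value:
            return None
        value = value[part]
    return value

realistic = {"trigger": {"payload": {"priority": "high", "source": "pagerduty"}}}
empty = {"trigger": {"payload": {}}}

# With a realistic payload, the condition can branch both ways...
print(resolve("trigger.payload.priority", realistic) == "high")  # True
# ...but with an empty payload the field resolves to None, so the
# comparison is always false and the workflow always takes one branch.
print(resolve("trigger.payload.priority", empty) == "high")      # False
```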
## Reviewing results
After a test run completes, open it in the execution history to inspect the timeline, which shows exactly where execution succeeded, failed, or took unexpected branches.
### Node-level output inspection
Click any node in the timeline to see its full output:
- Action nodes — the complete response text, agents used, and tools called
- Condition nodes — the expression evaluated, the context values used, and the boolean result
- Router nodes — which route was selected and why
- Delivery nodes — delivery status for each configured method
- Merge nodes — which branches contributed and their collected results
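As one illustration, a condition node's recorded output pairs the expression with the context values it was evaluated against, which is usually enough to spot a divergence by hand. The field names below are assumptions for illustration, not the product's actual schema:

```python
# Illustrative shape of a condition node's recorded output.
condition_output = {
    "node_type": "condition",
    "expression": "{{ trigger.payload.priority }} == 'high'",
    "context_values": {"trigger.payload.priority": "high"},
    "result": True,
}

# To pinpoint a divergence, re-derive the expected boolean from the
# recorded context values and compare it with the recorded result.
expected = condition_output["context_values"]["trigger.payload.priority"] == "high"
print(expected == condition_output["result"])  # True — node behaved as expected
```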
This level of detail helps you pinpoint exactly where a workflow's logic diverges from expectations.
## Common failure patterns
| Problem | Likely Cause | Fix |
|---|---|---|
| Condition always takes one branch | Expression references a missing or misspelled payload field | Check {{ trigger.payload.field_name }} against your actual payload structure |
| Action node times out | Instruction is too broad or data source is slow | Narrow the instruction, increase timeout, or enable retries |
| Delivery fails | Missing integration connection or invalid channel/URL | Connect the required integration or verify delivery configuration |
| Merge node errors | No incoming branches executed | Ensure at least one branch path leads to the merge |
| Template rendering error | Referencing a node that hasn't executed yet | Check node ordering in the DAG — templates can only reference upstream nodes |
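The last row's "upstream only" rule can be sketched as a reachability check over the DAG. The graph representation, node names, and helper below are illustrative assumptions, not how the product stores workflows:

```python
# DAG as node -> list of its dependency (upstream) nodes; names are made up.
dag = {
    "trigger": [],
    "fetch": ["trigger"],
    "summarize": ["fetch"],
    "deliver": ["summarize"],
}

def upstream_of(node, dag):
    """All nodes reachable by walking dependencies from `node`."""
    seen = set()
    stack = list(dag[node])
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(dag[dep])
    return seen

# A template in "deliver" may reference "fetch", which is upstream...
print("fetch" in upstream_of("deliver", dag))  # True
# ...but a template in "fetch" referencing "deliver" would fail to
# render, because "deliver" has not executed yet at that point.
print("deliver" in upstream_of("fetch", dag))  # False
```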
## Edit-rerun iteration
Testing is typically an iterative cycle:

1. Run a test execution
2. Inspect the timeline to find issues
3. Edit the problematic node's configuration
4. Rerun with the same or updated test payload
5. Repeat until all nodes behave correctly
You can keep the workflow in Draft status throughout this cycle. Once satisfied with test results, save and activate to enable the production trigger.
Each edit creates a new version, so you can always compare what changed between test runs or revert if an edit makes things worse.
## Related pages
- Execution Monitoring — detailed timeline and status tracking
- Creating Workflows — the full creation and editing flow
- Workflow Concepts — execution states and timeouts
- Versioning — track changes across iterations