Run Panel
Testing and Debugging Workflows
The Run Panel in Lleverage lets you test your workflows before publishing them. Opened from the Run button in the top right corner of your screen, it executes your workflow, shows each step in real time, exposes detailed traces, and helps you debug issues with AI assistance.
Key Features
Real-time workflow execution and visualization
Input field generation based on trigger type
Detailed trace information with inputs, outputs, and timing
AI-powered error fixing with Copilot integration
History of previous runs for comparison
Ability to edit and rerun with different inputs
Response rating system for workflow improvement
Output display with copy functionality
How to Access the Run Panel
Create or open a workflow in the Canvas
Ensure there are no errors in your workflow
Click the "Run" button in the top right corner of your screen
The Run Panel will open as a modal window
💡 The Run button will be highlighted in purple when your workflow is error-free and ready to test.

How to Test Your Workflow
When the Run Panel opens, you'll see input fields based on your trigger type
Fill in the required information or click "Generate Inputs" for sample data (see the illustrative example after this section)
Click "Submit" or "Run Workflow" to execute the test
Watch each node in your workflow process in real time, with progress indicators showing its status
Review the output displayed at the bottom of the panel
💡 You can see each node activate on the canvas as the workflow executes, giving you a visual representation of the data flow.
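The fields shown here depend entirely on your trigger configuration. As a rough illustration only (the field names below are hypothetical and not taken from Lleverage), generated sample inputs for a support-request style trigger might resemble a simple key-value payload:

```python
# Hypothetical test inputs for a workflow run.
# The field names are illustrative; the Run Panel derives the real
# fields from your own trigger configuration.
sample_inputs = {
    "customer_email": "jane.doe@example.com",
    "subject": "Order #1042 arrived damaged",
    "message": "The package was crushed in transit. Can I get a replacement?",
}

if __name__ == "__main__":
    for field, value in sample_inputs.items():
        print(f"{field}: {value}")
```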

How to View and Analyze Traces
After running your workflow, you'll see progress indicators showing the execution of each node
Click to expand any trace to view detailed information:
Input data for that specific node
Output data generated by the node
Execution time for the step
For AI steps: the thinking process behind the response
Copy data in various formats including JSON for further analysis
Use trace information to debug and optimize your workflow performance (see the sketch after this list)
💡 Traces provide comprehensive insight into how your workflow processes data at each step, making debugging much more effective.
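Because trace data can be copied out as JSON, you can also analyze it outside Lleverage. The sketch below is a minimal, hypothetical example; the field names (node, duration_ms, input, output) are assumptions for illustration, so check the structure of your own copied traces before relying on it:

```python
import json

# Hypothetical trace records copied from the Run Panel as JSON.
# The field names (node, duration_ms, input, output) are assumptions;
# inspect your own copied traces to see the real structure.
copied_traces = """
[
  {"node": "Webhook Trigger", "duration_ms": 12,
   "input": {}, "output": {"message": "Order #1042 arrived damaged"}},
  {"node": "Classify Request", "duration_ms": 840,
   "input": {"message": "Order #1042 arrived damaged"},
   "output": {"category": "complaint"}},
  {"node": "Draft Reply", "duration_ms": 2310,
   "input": {"category": "complaint"},
   "output": {"reply": "We're sorry about the damaged order..."}}
]
"""

def slowest_steps(traces, top=3):
    """Return the slowest nodes, a quick way to spot optimization targets."""
    return sorted(traces, key=lambda t: t["duration_ms"], reverse=True)[:top]

if __name__ == "__main__":
    for step in slowest_steps(json.loads(copied_traces)):
        print(f'{step["node"]}: {step["duration_ms"]} ms')
```

Sorting by duration makes it easy to see which step dominates the overall run time.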
How to Fix Errors with AI Assistance
If your workflow encounters an error during execution, an error message will appear
You'll see one of two Copilot fix options:
"Get Copilot to Fix it and Run": AI fixes the workflow and automatically reruns it
"Get Copilot to Fix it": AI fixes the workflow, but you must manually run it again
Click your preferred option to let the AI analyze and resolve the issue
Review the changes made by Copilot before proceeding
⚠️ Always review AI-suggested fixes to ensure they align with your workflow's intended functionality.
How to Rate and Improve Responses
After a workflow run completes, locate the response rating options at the bottom of the output
Mark responses as "Good Response" or "Bad Response" based on quality
Use this feedback to track workflow performance over time
Consider making adjustments to workflows that consistently receive poor ratings
How to Create New Tests and Rerun
To create a new test: Click "New" at the bottom right to start fresh with empty inputs
To rerun with same inputs: Click "Run Again" at the bottom right
To create a test from current output: Click "Create Test" to save current configuration as a reusable test case
How to View Previous Runs
Click the dropdown arrow next to "Run Workflow" in the top left corner of the Run Panel
Select from the list of previous runs to view their inputs and outputs
Compare different runs to analyze how changes affect your workflow (see the comparison sketch below)
💡 Each run is recorded separately, making it easy to track the performance of different versions of your workflow.
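If you copy the output of two runs (see "How to Copy Output Data" below), you can also diff them outside Lleverage. A minimal sketch, assuming both outputs are JSON strings pasted directly into the script:

```python
import difflib
import json

# Hypothetical outputs copied from two previous runs of the same workflow.
run_a = '{"category": "complaint", "reply": "We are sorry about the damaged order."}'
run_b = '{"category": "complaint", "reply": "Sorry for the trouble; a replacement is on its way."}'

def pretty(raw):
    """Normalize a copied JSON output so the diff ignores formatting noise."""
    return json.dumps(json.loads(raw), indent=2, sort_keys=True).splitlines()

if __name__ == "__main__":
    diff = difflib.unified_diff(pretty(run_a), pretty(run_b),
                                fromfile="run A", tofile="run B", lineterm="")
    print("\n".join(diff))
```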

How to Edit and Rerun a Test
View a previous run from the dropdown menu
Click the "Edit" button to modify the inputs
Make your changes to the input values
Click "Submit" to run the workflow with the new inputs
A new run will be logged with your updated inputs and traces
⚠️ Editing a previous run doesn't overwrite it; instead, a new run is created with the modified inputs.
How to Copy Output Data
Navigate to the output section at the bottom of the Run Panel
Click the "Copy" button next to the output you want to save
The output will be copied to your clipboard for use elsewhere
Use copied data for documentation, sharing results, or further processing
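For further processing, a common pattern is to paste the copied output into a file and load it from a script. A minimal sketch, assuming the copied output is JSON and was saved to a hypothetical file named run_output.json:

```python
import json
from pathlib import Path

# Hypothetical: output copied from the Run Panel and pasted into this file.
OUTPUT_FILE = Path("run_output.json")

if __name__ == "__main__":
    output = json.loads(OUTPUT_FILE.read_text(encoding="utf-8"))
    # Pretty-print the copied output, e.g. to embed in documentation
    # or hand off to another tool.
    print(json.dumps(output, indent=2, ensure_ascii=False))
```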
How to Reset the Run Panel
Click the "Reset" button in the top right corner of the Run Panel
The panel will clear all inputs and return to its initial state
You can now start a fresh test with new inputs

Next Steps After Testing
Once you're satisfied with your workflow's performance in the Run Panel:
Review all traces to ensure optimal performance
Address any consistently poor-rated responses
Click "Publish" to make your workflow available for deployment
Follow the publishing process to complete the deployment
⚠️ Thorough testing in the Run Panel, including trace analysis and error resolution, is essential before publishing to ensure your workflow functions optimally in a live environment.