Run Panel

Testing and Debugging Workflows

The Run Panel in Lleverage allows you to test your workflows before publishing them. Located in the top right corner of your screen, this powerful feature lets you execute your workflow, view the process in real-time, examine detailed traces, and debug issues with AI assistance.

Key Features

  • Real-time workflow execution and visualization

  • Input field generation based on trigger type

  • Detailed trace information with inputs, outputs, and timing

  • AI-powered error fixing with Copilot integration

  • History of previous runs for comparison

  • Ability to edit and rerun with different inputs

  • Response rating system for workflow improvement

  • Output display with copy functionality

How to Access the Run Panel

  1. Create or open a workflow in the Canvas

  2. Ensure there are no errors in your workflow (the Run button will be purple when ready)

  3. Click the "Run" button in the top right corner of your screen

  4. The Run Panel will open as a modal window

💡 The Run button will be highlighted in purple when your workflow is error-free and ready to test.


How to Test Your Workflow

  1. When the Run Panel opens, you'll see input fields based on your trigger type

  2. Fill in the required information or click "Generate Inputs" for sample data

  3. Click "Submit" or "Run Workflow" to execute the test

  4. Watch as each node in your workflow processes in real-time with progress indicators

  5. Review the output displayed at the bottom of the panel

💡 You can see each node activate on the canvas as the workflow executes, giving you a visual representation of the data flow.


How to View and Analyze Traces

  1. After running your workflow, you'll see progress indicators showing the execution of each node

  2. Click to expand any trace to view detailed information:

    • Input data for that specific node

    • Output data generated by the node

    • Execution time for the step

    • For AI steps: the thinking process behind the response

  3. Copy data in various formats including JSON for further analysis

  4. Use trace information to debug and optimize your workflow performance

💡 Traces provide comprehensive insight into how your workflow processes data at each step, making debugging much more effective.
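The trace details described above (input data, output data, and execution time per node) lend themselves to quick offline analysis once exported or copied as JSON. As a hypothetical sketch — the field names below are illustrative only, not Lleverage's actual trace schema — you could total timings and find the slowest step like this:

```python
import json

# Hypothetical trace export; field names are invented for illustration,
# not Lleverage's actual schema.
traces_json = """
[
  {"node": "Trigger",  "input": {"query": "refund policy"}, "output": {"query": "refund policy"}, "ms": 3},
  {"node": "AI Step",  "input": {"query": "refund policy"}, "output": {"answer": "Refunds take 5 days."}, "ms": 842},
  {"node": "Response", "input": {"answer": "Refunds take 5 days."}, "output": {"answer": "Refunds take 5 days."}, "ms": 2}
]
"""

traces = json.loads(traces_json)

# Total execution time and the slowest node are the usual optimization targets.
total_ms = sum(t["ms"] for t in traces)
slowest = max(traces, key=lambda t: t["ms"])

print(f"total: {total_ms} ms, slowest: {slowest['node']} ({slowest['ms']} ms)")
# → total: 847 ms, slowest: AI Step (842 ms)
```

In most workflows the AI steps dominate the runtime, so sorting traces by duration quickly points at the node worth optimizing first.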

How to Fix Errors with AI Assistance

  1. If your workflow encounters an error during execution, an error message will appear

  2. You'll see one of two Copilot fix options:

    • "Get Copilot to Fix it and Run": AI fixes the workflow and automatically reruns it

    • "Get Copilot to Fix it": AI fixes the workflow, but you must manually run it again

  3. Click your preferred option to let the AI analyze and resolve the issue

  4. Review the changes made by Copilot before proceeding

⚠️ Always review AI-suggested fixes to ensure they align with your workflow's intended functionality.

How to Rate and Improve Responses

  1. After a workflow run completes, locate the response rating options at the bottom of the output

  2. Mark responses as "Good Response" or "Bad Response" based on quality

  3. Use this feedback to track workflow performance over time

  4. Consider making adjustments to workflows that consistently receive poor ratings

How to Create New Tests and Rerun

  1. To create a new test: Click "New" at the bottom right to start fresh with empty inputs

  2. To rerun with same inputs: Click "Run Again" at the bottom right

  3. To create a test from the current output: Click "Create Test" to save the current configuration as a reusable test case

How to View Previous Runs

  1. Click the dropdown arrow next to "Run Workflow" in the top left corner of the Run Panel

  2. Select from the list of previous runs to view their inputs and outputs

  3. Compare different runs to analyze how changes affect your workflow

💡 Each run is recorded separately, making it easy to track the performance of different versions of your workflow.
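When comparing runs, the useful question is usually which inputs changed between two attempts. A minimal sketch of that diff — the run records here are invented for illustration, not the Run Panel's actual data format:

```python
# Hypothetical run records; the structure is invented for illustration.
run_a = {"inputs": {"tone": "formal", "topic": "refunds"}, "output": "Dear customer, ..."}
run_b = {"inputs": {"tone": "casual", "topic": "refunds"}, "output": "Hey there! ..."}

# Report which input fields differ between the two runs, as (old, new) pairs.
changed = {
    key: (run_a["inputs"][key], run_b["inputs"][key])
    for key in run_a["inputs"]
    if run_a["inputs"][key] != run_b["inputs"][key]
}

print(changed)  # → {'tone': ('formal', 'casual')}
```

Pairing an input diff like this with the two outputs makes it clear which change in the inputs drove the change in behavior.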


How to Edit and Rerun a Test

  1. View a previous run from the dropdown menu

  2. Click the "Edit" button to modify the inputs

  3. Make your changes to the input values

  4. Click "Submit" to run the workflow with the new inputs

  5. A new run will be logged with your updated inputs and traces

⚠️ Editing a previous run doesn't overwrite it; instead, a new run is created with the modified inputs.

How to Copy Output Data

  1. Navigate to the output section at the bottom of the Run Panel

  2. Click the "Copy" button next to the output you want to save

  3. The output will be copied to your clipboard for use elsewhere

  4. Use copied data for documentation, sharing results, or further processing
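When the copied output is JSON, a small script can validate and pretty-print it before it goes into documentation or downstream processing. A sketch assuming the copied text is pasted into a string (the sample output below is invented):

```python
import json

# Paste the copied Run Panel output here; this sample value is invented.
copied_output = '{"answer": "Refunds take 5 days.", "confidence": 0.92}'

# json.loads fails fast if the copy was truncated or malformed.
data = json.loads(copied_output)

# Pretty-print with stable key order for readable, diff-friendly docs.
pretty = json.dumps(data, indent=2, sort_keys=True)
print(pretty)
```

Validating pasted output this way catches truncated clipboard copies before they end up in shared results.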

How to Reset the Run Panel

  1. Click the "Reset" button in the top right corner of the Run Panel

  2. The panel will clear all inputs and return to its initial state

  3. You can now start a fresh test with new inputs


Next Steps After Testing

  1. Confirm that you're satisfied with your workflow's performance in the Run Panel

  2. Review all traces to ensure optimal performance

  3. Address any consistently poor-rated responses

  4. Click "Publish" to make your workflow available for deployment

  5. Follow the publishing process to complete the deployment

⚠️ Thorough testing in the Run Panel, including trace analysis and error resolution, is essential before publishing to ensure your workflow functions optimally in a live environment.
