Using the Prompt action
The Prompt action is your interface to AI models in Lleverage. It lets you create AI-powered features by writing prompts, selecting models, and configuring how the AI should respond.
Basic setup
When you add a Prompt action to your workflow, you'll see two main sections: Variables and Prompt. The Variables section is where you connect data from other actions in your workflow, while the Prompt section is where you write your instructions for the AI. You'll also find model selection and configuration options that help you control exactly how the AI behaves.
Writing prompts
Your prompt tells the AI what you want it to do. You can write your prompt directly in the editor, or use the "Engineer prompt" button to access the full prompt engineering interface with additional tools and features. If you're not sure where to start, the "Generate" button can help create a prompt based on your description of what you need.
A typical prompt includes a system message that sets the context and user messages that provide specific instructions. You can enhance these with variables to make your prompts dynamic and responsive to your workflow's data.
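For example, a simple setup (the wording here is illustrative, not a built-in template) might pair a system message like "You are a concise assistant that writes summaries for a general audience" with a user message like "Write a summary about {{topic}} in three sentences." The {{topic}} placeholder is filled with real data at run time, as described in the next section.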
Learn more about prompt engineering →
Working with variables
Making your prompts dynamic is a two-step process: first you define prompt variables, then you link them to data from your workflow.
First, in the Prompt tab, define where you want to insert dynamic text by using double curly braces in your prompt. For example, you might write "Write a summary about {{topic}}", where topic is your prompt variable.
Then, switch to the Variables tab to link these prompt variables to actual data from your workflow. Here you can connect each prompt variable to workflow variables using the {{ }} syntax, for instance linking {{topic}} to {{input.body.subject}}.
During testing, you can use the "Test with mock data" feature to verify how your prompt handles different inputs without needing actual production data. This lets you ensure your variable connections work as expected before deploying your workflow.
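For example, if {{topic}} is linked to {{input.body.subject}}, you could enter a mock value such as "Quarterly sales report" (an invented value for illustration). The prompt then renders as "Write a summary about Quarterly sales report", so you can check the result before any real data flows through the workflow.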
More information on working with variables →
Working with files
The Prompt action can work with various types of files as part of your workflow. When working with images, you can include them directly in your prompt by adding them to a User message - simply reference your image variable there.
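For instance, if an earlier action provides an image in a variable such as {{input.body.photo}} (a hypothetical name used here for illustration), you could add that variable to a User message alongside an instruction like "Describe what is shown in this image."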
For document processing, you'll want to prepare your files first. While PDFs and other documents can't be fed directly into a prompt, you can use specific Tools actions like extractText or convertToImages to get their content in a format the AI can understand. These actions are covered in detail in the Tools section of our documentation.
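For example, a workflow might receive a PDF, run it through extractText, and then link the extracted text to a prompt variable such as {{document_text}} (a name chosen here for illustration) that a User message uses to ask the AI to summarize the document.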
This way, you can create workflows that handle both images and documents effectively:
Send images directly through User messages in your prompt
Process documents first, then use their content in your prompts
Combine different file types in the same workflow
Explore the various Tools actions →
Configuring the model
Choosing the right model is crucial for your workflow's success. Each model in Lleverage comes with detailed information about its intelligence rating, speed, cost, context handling, and reliability. This information helps you select the most appropriate model for your specific needs.
The Advanced model settings give you precise control over your model's behavior. You can adjust the temperature to balance creativity with predictability, set token limits, and fine-tune other parameters that affect how the model generates responses. The output format can be configured to return plain text, structured JSON, or other specific data formats depending on your needs.
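As a rough guide, a low temperature such as 0.2 tends to produce more consistent, predictable responses suited to tasks like data extraction, while a higher value such as 0.8 allows more varied, creative output for tasks like drafting copy. These values are illustrative; tune them for your own use case.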
Understand model configuration →
Testing and output
Testing your Prompt action is simple. Use the Run button to test with current data and preview the results directly in the interface. You can monitor performance metrics like token usage, cost, and latency to ensure your prompt is efficient and effective.
The output format determines how the AI's response will be structured. Whether you need plain text for content generation or structured JSON for data processing, you can configure the output to match your requirements.
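For instance, a workflow that processes support emails might request structured JSON with fields like "summary" and "sentiment" (field names chosen here for illustration), so downstream actions can read specific values instead of parsing free text.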