Prompts
A Prompt in Lleverage is a concise set of natural-language instructions that tells the AI model which task(s) to perform and how to respond. The prompt is crucial in guiding the AI to understand and fulfil the specific function of your Workflows; effectively, it is the core mechanic in your workflow that drives the feature you're building.
You can create prompts directly in your workflow, or pre-build them in our Prompt Studio and utilise them within a workflow. You can move back and forth between the Prompt Studio and your workflow for the most effective experience.
Different types of generation are supported:
Chat for conversational flows.
Completions for text analysis, text generation, or data processing.
Image generation for text-to-image use cases.
Audio generation for text-to-audio use cases.
To build effective prompts, these traits are important:
Be specific: Craft your Prompt with precision and clarity to ensure you receive the most relevant responses from the model.
Give clear context: Include sufficient context within your Prompt to help the model produce consistent and accurate responses.
Structure well: Properly structuring your Prompt is essential for clear communication, enabling the AI model to understand and execute your requests effectively.
To further optimise your prompts within Lleverage, you can make use of the following constructs:
Dynamic Variables: Incorporate dynamic variables into your Prompt to allow for real-time data insertion, which enhances the adaptability and relevance of AI responses in varying scenarios.
Markdown: Utilise Markdown, a lightweight markup language, to format your Prompt. This enhances readability and effectiveness. Refer to our Markdown syntax guide for more details on how to structure your prompts effectively.
Parameters, Traits, and Constraints: Clearly state any parameters, traits, and constraints in your Prompt. For example, you might specify the response length, stylistic preferences, or limitations on the AI’s response generation. These specifications help steer the AI model towards the desired output.
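To illustrate how these constructs combine, here is a minimal sketch of a prompt template that uses Markdown structure, a dynamic variable, and explicit constraints. The `{{variable}}` placeholder syntax and the `render_prompt` helper are illustrative assumptions, not Lleverage's actual template format.

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with real-time values.

    Note: the {{name}} syntax is an illustrative assumption,
    not necessarily Lleverage's actual variable syntax.
    """
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(variables[m.group(1)]),
        template,
    )

# A prompt combining Markdown structure, a dynamic variable,
# and explicit constraints on length and tone.
template = (
    "## Task\n"
    "Summarise the customer review below in one sentence.\n\n"
    "## Review\n"
    "{{review}}\n\n"
    "## Constraints\n"
    "- Maximum 25 words\n"
    "- Neutral, professional tone\n"
)

prompt = render_prompt(template, {"review": "Great product, but shipping was slow."})
```

At run time, each workflow execution would substitute fresh data into the placeholder, so the same template serves many inputs.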
Model Configs allow you to select your model provider and tweak settings that guide the generative model's behaviour, including aspects like max tokens, temperature, retries, timeouts and more. Test out multiple variations of models and compare them to find the right config for your specific features.
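As a rough sketch, a model config of this kind might look like the following. The field names and values here are assumptions for illustration, not Lleverage's actual schema.

```python
# Illustrative model config; field names and ranges are assumed,
# not taken from Lleverage's actual settings schema.
model_config = {
    "provider": "openai",   # which model provider to use
    "model": "gpt-4o",      # model name at that provider
    "max_tokens": 512,      # cap on response length
    "temperature": 0.2,     # lower values give more deterministic output
    "retries": 3,           # how often to retry failed calls
    "timeout_seconds": 30,  # abort requests that take too long
}

# A variation to compare against: same config, more creative output.
variant = {**model_config, "temperature": 0.8}
```

Running the same prompt against both configs and comparing the outputs is one simple way to find the right settings for a feature.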