What is Lleverage?

Welcome to the Lleverage Docs. This site includes an introduction to Lleverage concepts, resources for using the API and SDK, and best practices for building and deploying AI workflows.

Building AI features and making them production-ready is hard. Lleverage aims to simplify this process. Lleverage is a low-code AI feature development and observability platform for product development teams. It drives collaboration between product and engineering teams so they can build comprehensive AI functionality with confidence, and it democratises the process of creating, testing, and deploying advanced AI features.

It lets you build anything from simple to comprehensive AI workflows that integrate seamlessly with your application. It also helps you optimise cost, quality, and response times for your model interactions, so you can select the most suitable providers and configurations for your feature.

Lleverage is simple and straightforward, but it helps to be aware of the following constructs so you can lleverage the full feature set with ease.

Key Concepts

  • Organisations are the top-level containers that house all your Lleverage content and team members.

  • Workflows let you build and collaborate on powerful AI workflows that chain business logic, data, APIs, and prompts into full-fledged AI functionality.

  • Prompts are used to construct and optimise calls to generative models.

  • Connections give you access to data and third-party services for building AI workflows. There are four types of connections:

    • Model providers to call generative services such as LLMs, text-to-image, or text-to-speech models.

    • Databases to fetch data from and store data in your own relational, graph, or vector databases.

    • APIs to fetch data from and send data back to your own or third-party APIs.

    • Integrations to authorise access to third-party platforms, unlocking bespoke building blocks for your AI workflows.

  • Insights let you track how your workflows and prompts run in production and explore ways to optimise for cost, latency, or quality.
