Vector Stores

Vector Stores are a critical component of workflows in Lleverage, enabling the storage and retrieval of vector embeddings for efficient AI-driven search and retrieval tasks. Each client in Lleverage automatically receives their own Postgres Vector Database, ensuring secure and isolated vector management.

We also offer the ability to bring your own vector store (Weaviate, Pinecone, or Postgres Vector). Please contact sales@Lleverage.ai to set this up.

Key Features

  • Automatic Vectorization: When you upload documents such as PDFs, CSVs, Word documents, or PowerPoint presentations, Lleverage automatically creates vector embeddings for them using OpenAI's embeddings model. This enables powerful similarity search and retrieval based on the content of your documents.

  • Supported Formats: You can upload various document types, including:

    • PDFs

    • CSVs

    • Word documents (DOCX)

    • PowerPoint presentations (PPTX)

  • Preprocessing with Unstructured.io: Before storing vectors, Lleverage leverages the capabilities of Unstructured.io, a best-in-class platform for document preprocessing. This ensures that documents are properly cleaned, chunked, and transformed into a format suitable for creating embeddings. Additionally, Unstructured.io generates summaries, making your document content ready for retrieval-augmented generation (RAG) workflows.

How Vector Stores Work

  1. Document Upload: You can upload your documents (e.g., PDFs, CSVs, DOCX, PPTX) to your project’s vector store.

  2. Preprocessing and Chunking: Using Unstructured.io, documents are automatically cleaned, transformed, and split into smaller, manageable chunks to optimize the embedding process.

  3. Embedding Generation: Lleverage uses OpenAI's default embeddings model to create vector representations of your document chunks. These embeddings capture the semantic meaning of the text, enabling advanced similarity searches.

  4. Storage: The resulting vectors are stored in your personal Postgres Vector Database, where they can be queried for similarity search or used in workflows to retrieve relevant documents based on input queries.
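The steps above can be sketched end to end. This is an illustrative sketch, not Lleverage's internal implementation: the chunk size, overlap, and table name are assumptions, and the real preprocessing is handled by Unstructured.io rather than a simple character splitter.

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split a document's text into overlapping chunks (step 2).

    Overlap preserves context that would otherwise be cut at chunk
    boundaries. Real preprocessing (via Unstructured.io) is smarter:
    it also cleans the document and splits on structural elements.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks

# Steps 3-4 (hypothetical): each chunk would then be sent to OpenAI's
# embeddings endpoint, and the resulting vectors inserted into the
# project's Postgres Vector Database, e.g.
#   INSERT INTO embeddings (chunk, vector) VALUES (%s, %s);
```

A 1,000-character document with these defaults yields three chunks of 500, 500, and 100 characters, with 50 characters of overlap between neighbours.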

Use Cases for Vector Stores

  • Semantic Search: Query your vector store using text-based input to find documents or document segments that are semantically similar to the query.

  • Retrieval-Augmented Generation (RAG): Use your vector store to retrieve relevant information during a workflow execution and integrate it into responses from large language models (LLMs).

  • Data Organization: Automatically structure large amounts of unstructured data by transforming it into searchable, vectorized content, making it easier to retrieve insights from vast document collections.
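Semantic search boils down to ranking stored embeddings by their similarity to a query embedding. A minimal sketch using cosine similarity over toy 3-dimensional vectors (real OpenAI embeddings have hundreds or thousands of dimensions, and the store would be queried in Postgres rather than in Python):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity: 1.0 for identical directions, 0.0 for orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query_vec: list[float], store: list[tuple[str, list[float]]], k: int = 2) -> list[str]:
    """Return the k chunk texts whose embeddings are most similar to the query."""
    ranked = sorted(store, key=lambda item: cosine_similarity(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# Toy "embeddings" standing in for real document chunks.
store = [
    ("refund policy", [0.9, 0.1, 0.0]),
    ("shipping times", [0.1, 0.9, 0.0]),
    ("privacy notice", [0.0, 0.1, 0.9]),
]
print(top_k([0.8, 0.2, 0.0], store, k=1))  # → ['refund policy']
```

In a RAG workflow, the chunks returned by `top_k` would be passed to an LLM as context, as shown in the Integration in Workflows section below.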

Integration in Workflows

Vector Stores integrate seamlessly into your Lleverage workflows. By connecting the Vector Store Similarity Search node, you can query your vectorized documents based on input text and retrieve relevant results, which can then be passed into downstream nodes such as LLMs or prompts.
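Conceptually, the Similarity Search node runs a nearest-neighbour query against the Postgres Vector Database and hands the matching chunks to the next node. A sketch of the equivalent pgvector query and of the prompt assembly a downstream LLM node might receive; the `embeddings` table and its columns are hypothetical, since Lleverage manages the actual schema:

```python
# Hypothetical table/column names; Lleverage manages the real schema.
SIMILARITY_QUERY = """
    SELECT chunk
    FROM embeddings
    ORDER BY vector <=> %(query_vector)s  -- pgvector cosine distance operator
    LIMIT %(k)s;
"""

def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble retrieved chunks into a prompt for a downstream LLM node."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```

The query and prompt template here are illustrative; in practice you wire the Vector Store Similarity Search node's output directly into a prompt or LLM node rather than writing SQL yourself.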
