Overview
Explore multimodal search and retrieval-augmented generation (RAG) with vector databases in this 52-minute session from LLMOps Space. The talk covers integrating open-source multimodal embedding models with large generative multimodal models to enable cross-modal search and MM-RAG. Learn how multimodal embedding models map diverse data types, such as images, text, audio, and other sensory information, into a shared vector space for advanced data analysis; how to search across different data modalities; and how to pair retrieval with generative models for large-scale data retrieval and generation. The session also shows how real-time cross-modal retrieval lets LLMs reason over enterprise-scale multimodal data, enhancing decision-making and insights. Presented by Zain of Weaviate as part of LLMOps Space, a global community focused on deploying LLMs into production.
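The core idea the session builds on, embedding different modalities into one shared vector space and ranking them by similarity to a query, can be sketched in a few lines. This is a minimal illustration, not the session's actual code: the hand-written vectors and file names below are hypothetical stand-ins for embeddings that a CLIP-style encoder would produce and that a vector database such as Weaviate would store and index.

```python
import numpy as np

# Toy stand-ins for multimodal embeddings. In a real system these vectors
# would come from a shared-space encoder (e.g. a CLIP-style model) and be
# stored in a vector database rather than an in-memory dict.
INDEX = {
    "sunset.jpg":  np.array([0.9, 0.1, 0.0]),   # image embedding
    "invoice.pdf": np.array([0.0, 0.2, 0.9]),   # document embedding
    "waves.wav":   np.array([0.6, 0.5, 0.2]),   # audio embedding
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cross_modal_search(query_vec, index, top_k=2):
    """Rank items of any modality by similarity to the query embedding."""
    scored = sorted(index.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:top_k]]

# A text query (e.g. "photo of the sea at dusk") embedded into the same space:
query = np.array([0.85, 0.2, 0.05])
print(cross_modal_search(query, INDEX))
```

Because all modalities live in one space, a text query can retrieve images or audio directly; in an MM-RAG pipeline, the top-ranked items would then be passed to a generative multimodal model as grounding context.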
Syllabus
Building Multi-Modal Search and RAG with Vector Databases | LLMOps
Taught by
LLMOps Space