Overview
Explore the world of multimodal machine learning and cross-modal search in this conference talk from NDC London 2024. Dive into the challenges of real-world problems that involve multiple data modalities, including spoken language, gestures, and various sensor inputs. Learn how open-source multimodal models, such as ImageBind, can process and understand diverse data types like images, video, text, audio, and tactile information. Discover techniques for implementing cross-modal search at a billion-object scale using vector databases, enabling innovative applications like searching audio with images or videos with text. Through live code demos and large-scale dataset examples, gain insights into scaling multimodal embedding models for production use and adding natural search interfaces to your applications. Acquire practical knowledge on integrating cross-modal retrieval capabilities to enhance your software's functionality and user experience.
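To illustrate the core idea behind cross-modal search described above, the sketch below shows how queries and stored objects from different modalities can be mapped into one shared embedding space and retrieved by vector similarity. This is a minimal, hypothetical example: `embed_text` and `embed_image` are placeholder stand-ins for a multimodal encoder such as ImageBind, and the brute-force similarity search stands in for the vector database the talk covers; it is not the speaker's actual demo code.

```python
# Illustrative sketch of cross-modal search: text and images are embedded into
# the same vector space, so a text query can retrieve images by similarity.
# embed_text / embed_image are hypothetical placeholders for a real multimodal
# encoder; a vector database would replace the brute-force search shown here.
import numpy as np

def embed_text(text: str) -> np.ndarray:
    """Placeholder: return a unit-length embedding for a text query."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(1024)
    return v / np.linalg.norm(v)

def embed_image(path: str) -> np.ndarray:
    """Placeholder: return a unit-length embedding for an image file."""
    rng = np.random.default_rng(abs(hash(path)) % (2**32))
    v = rng.standard_normal(1024)
    return v / np.linalg.norm(v)

# Index a small collection of images (at billion-object scale this index
# would live in a vector database with approximate nearest-neighbour search).
image_paths = ["cat.jpg", "beach.jpg", "guitar.jpg"]
index = np.stack([embed_image(p) for p in image_paths])

# Cross-modal query: search the image index with a natural-language description.
query = embed_text("a dog playing on the beach")
scores = index @ query                  # cosine similarity (vectors are unit length)
top = np.argsort(-scores)[:2]           # top-2 nearest neighbours
for i in top:
    print(image_paths[i], float(scores[i]))
```

The same pattern generalises to the other pairings mentioned in the talk (audio searched with images, video searched with text): as long as every modality is encoded into the shared space, retrieval reduces to a nearest-neighbour lookup.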
Syllabus
Using Vector Databases for Multimodal Embeddings and Search - Zain Hasan - NDC London 2024
Taught by
NDC Conferences