
RAG Has Been Oversimplified - Exploring Complexities in Retrieval Augmented Generation

MLOps.community via YouTube

Overview

Explore the complexities of Retrieval Augmented Generation (RAG) in this 49-minute MLOps podcast episode featuring Yujian Tang, Developer Advocate at Zilliz. Delve into the nuanced challenges developers face when implementing RAG, moving beyond industry oversimplifications. Learn about storing embeddings in vector databases, the consensus on what counts as a large versus a small language model, and the inner workings of QA bots. Discover critical components of the RAG stack, including citation building, context versus relevance, and similarity search. Examine RAG optimization techniques, scenarios where RAG may not be suitable, and multimodal RAG applications. Gain insights into fashion app development and video citation methods while understanding the trade-offs in interacting with LLMs.
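The overview mentions embeddings, vector databases, similarity search, and citation building without showing how they fit together. As a rough, self-contained sketch (not taken from the episode), the snippet below uses a made-up embed() helper and a plain in-memory list in place of a real embedding model and a vector database such as Milvus, then ranks documents by cosine similarity and assembles a numbered, citable context for the prompt.

```python
import hashlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Hypothetical embedding: a deterministic pseudo-random unit vector per text."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# "Index" a few documents as (text, embedding) pairs, as a vector database would.
documents = [
    "Milvus is an open-source vector database.",
    "RAG retrieves relevant context before the LLM generates an answer.",
    "Similarity search ranks stored embeddings against a query embedding.",
]
index = [(doc, embed(doc)) for doc in documents]

# Retrieval: embed the query and rank documents by cosine similarity
# (vectors are unit-normalized, so a dot product is the cosine score).
query = "How does retrieval augmented generation find context?"
q = embed(query)
ranked = sorted(index, key=lambda pair: float(q @ pair[1]), reverse=True)

# Augmentation: build a prompt from the top hits, numbered so the LLM can cite them.
top_k = ranked[:2]
context = "\n".join(f"[{i + 1}] {doc}" for i, (doc, _) in enumerate(top_k))
prompt = (
    "Answer using only the sources below and cite them by number.\n"
    f"{context}\n\nQuestion: {query}"
)
print(prompt)  # in a real pipeline this prompt would be sent to an LLM
```

In practice the embed() stand-in would be replaced by an actual embedding model and the list by a vector-database collection, but the retrieve-then-augment shape of the pipeline stays the same.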

Syllabus

Yujian's preferred coffee
Takeaways
Please like, share, and subscribe to our MLOps channels!
The hero of the LLM space
Embeddings into vector databases
The consensus on what counts as a large vs. small LLM
QA bot behind the scenes
Fun fact: getting more context
Do RAGs eliminate the ability of LLMs to hallucinate?
Critical parts of the RAG stack
Building citations
Difference between context and relevance
Missing prompt tooling
Similarity search
RAG optimization
Interacting with LLMs and trade-offs
What RAG is not suited for
Fashion app
Multimodal RAGs vs. LLM RAGs
Multimodal use cases
Video citations
Wrap-up

Taught by

MLOps.community
