Retrieval Augmented Generation (RAG) - Building Powerful LLM Pipelines
Neural Breakdown with AVB via YouTube
Overview
Learn to build powerful Large Language Model (LLM) pipelines through a comprehensive breakdown of Retrieval Augmented Generation (RAG) in this 17-minute technical video. Explore how RAG improves LLM responses by grounding them in external knowledge bases, starting with the basic pipeline and progressing to advanced implementations. Master key concepts including contextual chunking; data conversion with language-model embeddings and TF-IDF/BM-25; vector and graph database implementation; query-rewriting strategies such as HyDE (Hypothetical Document Embeddings); and post-retrieval optimization through Reciprocal Rank Fusion. Additional resources on vector databases, metadata filtering, contextual retrieval, and dense retrieval are provided through linked academic papers and reference materials.
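To make the retrieval stages concrete, here is a minimal sketch of three of the ideas the video covers: naive chunking, TF-IDF scoring, and Reciprocal Rank Fusion. This is illustrative only, not the video's implementation; all function names and parameters (chunk size, the RRF constant k=60) are assumptions chosen for the example, and it uses a plain TF-IDF sum in place of real embeddings or BM-25.

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Naive chunking: split text into fixed-size character windows.
    (Contextual chunking, as in the video, would also attach surrounding
    context to each chunk; this is the simplest possible baseline.)"""
    return [text[i:i + size] for i in range(0, len(text), size)]

def tf_idf_scores(query, docs):
    """Score each doc against the query with a simple TF-IDF sum
    (a stand-in for BM-25 or learned embeddings)."""
    n = len(docs)
    tokenized = [d.lower().split() for d in docs]
    df = Counter()                      # document frequency per term
    for toks in tokenized:
        for t in set(toks):
            df[t] += 1
    scores = []
    for toks in tokenized:
        tf = Counter(toks)              # term frequency in this doc
        scores.append(sum(
            tf[t] * math.log((n + 1) / (df[t] + 1))
            for t in query.lower().split() if t in tf
        ))
    return scores

def rrf(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists of doc ids.
    Each list contributes 1 / (k + rank) per document; higher is better."""
    fused = Counter()
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            fused[doc_id] += 1.0 / (k + rank)
    return [doc_id for doc_id, _ in fused.most_common()]

if __name__ == "__main__":
    docs = ["the cat sat on the mat", "dogs run in the park"]
    print(tf_idf_scores("cat mat", docs))   # first doc scores higher
    print(rrf([["a", "b", "c"], ["b", "a", "c"]]))
```

In a full RAG pipeline, the fused ranking would select the top chunks to insert into the LLM prompt; RRF is attractive because it combines lexical (TF-IDF/BM-25) and embedding-based rankings without needing their raw scores to be comparable.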
Syllabus
- Intro
- Retrieval Augmented Generation Blueprint
- Chunking and Contextual Chunking
- Data Conversion - Language Model Embeddings
- Data Conversion - TF-IDF and BM-25
- Vector and Graph Databases
- Query Rewriting
- Contextual Query Rewriting, HyDE
- Post Retrieval
- Reciprocal Rank Fusion
- Outro
Taught by
Neural Breakdown with AVB