Overview
Learn the fundamentals of Retrieval Augmented Generation (RAG) in this 24-minute technical tutorial that explores how to enhance Large Language Models with external knowledge sources. Discover the limitations of parametric knowledge, understand in-context learning, and master semantic search concepts using vector databases. Follow along with a practical lab demonstration that builds a RAG system for the Huberman Lab Podcast using Langchain, GPT-3.5, BGE embeddings, and ChromaDB. Access comprehensive resources including downloadable mindmaps, lab notebooks, and step-by-step guidance on processing podcast datasets, implementing document embeddings, and constructing full RAG chains with Langchain Expression Language. Gain hands-on experience with keyword and semantic search techniques while learning best practices for source attribution in RAG systems.
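To make the lab's indexing step concrete, here is a minimal sketch of what embedding podcast transcripts with BGE embeddings and populating a Chroma vector store can look like in Langchain. The episode data, file paths, chunk sizes, and the specific BGE model name are illustrative assumptions, not the tutorial's exact code, and import paths vary slightly across Langchain versions.

```python
# Indexing sketch: split transcripts into chunks, embed them with a BGE
# model, and store them in Chroma. Episode data and model name are
# illustrative assumptions.
from langchain_community.embeddings import HuggingFaceBgeEmbeddings
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_core.documents import Document

# Hypothetical transcript data: one dict per podcast episode.
episodes = [
    {"title": "Episode 1", "transcript": "Full transcript text of episode 1 ..."},
    {"title": "Episode 2", "transcript": "Full transcript text of episode 2 ..."},
]

# Wrap each transcript in a Document, keeping the episode title as metadata
# so answers can later be attributed to their source episode.
docs = [
    Document(page_content=e["transcript"], metadata={"title": e["title"]})
    for e in episodes
]

# Split long transcripts into overlapping chunks suitable for embedding.
splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
chunks = splitter.split_documents(docs)

# BGE sentence embeddings (any BGE variant works; this model name is an assumption).
embeddings = HuggingFaceBgeEmbeddings(model_name="BAAI/bge-small-en-v1.5")

# Embed the chunks and persist them in a local Chroma collection.
vectorstore = Chroma.from_documents(chunks, embeddings, persist_directory="./chroma_db")

# Semantic search: retrieve the chunks most similar to a natural-language query.
results = vectorstore.similarity_search("How does caffeine affect sleep?", k=4)
for doc in results:
    print(doc.metadata["title"], "->", doc.page_content[:80])
```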
Syllabus
- Introduction to RAG
- Parametric Knowledge Limitations
- In-Context Learning
- Retrieval Augmented Generation (RAG)
- Keyword Search
- Semantic Search / Sentence Similarity
- Hands-on RAG Example with Langchain
- Processing Huberman Lab Podcast Dataset
- Embedding Documents with BGE Embeddings
- Populating Chroma Vector Store
- Full RAG Chain with Langchain Expression Language (LCEL) (see the sketch after this list)
- Quoting Sources
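
The syllabus item above on the full LCEL chain can be pictured as follows. This is a minimal sketch assuming the `vectorstore` built in the indexing sketch earlier; the prompt wording, model name, and the idea of prefixing chunks with episode titles (to support quoting sources) are illustrative assumptions rather than the tutorial's exact code.

```python
# Full RAG chain sketch using Langchain Expression Language (LCEL),
# reusing the `vectorstore` from the indexing sketch above.
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI

# Turn the Chroma store into a retriever that returns the top-k chunks.
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

# Ask the model to answer only from the retrieved context and to cite
# the episode titles it used, supporting source attribution.
prompt = ChatPromptTemplate.from_template(
    "Answer the question using only the context below. "
    "Cite the episode titles you relied on.\n\n"
    "Context:\n{context}\n\nQuestion: {question}"
)

llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

def format_docs(docs):
    # Prefix each chunk with its episode title so the model can quote sources.
    return "\n\n".join(f"[{d.metadata['title']}]\n{d.page_content}" for d in docs)

# LCEL pipeline: retrieve -> format context -> fill prompt -> LLM -> plain string.
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

print(rag_chain.invoke("What does the podcast say about morning sunlight?"))
```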
Taught by
Donato Capitella