Chunking Strategies for Retrieval Augmented Generation (RAG) with LangChain and LlamaIndex

AI Bites via YouTube

Overview

Learn about text chunking techniques for Retrieval Augmented Generation (RAG) in this 17-minute technical video, which demonstrates hands-on implementation using LangChain and LlamaIndex. Explore different chunking methods, including fixed-size, recursive, document/code, and semantic chunking, to optimize how text is ingested into vector databases for RAG pipelines. Starting with a RAG refresher, dive into the importance of proper text segmentation for improving retrieval accuracy and reducing hallucinations in Large Language Models. Follow along with practical examples and understand when to use each chunking strategy for different types of content, from standard documents to programming code. Master essential concepts for building more effective RAG applications while learning from a Machine Learning researcher with 15 years of software engineering experience.
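
Below is a minimal sketch of the chunking strategies named above, using LangChain's text splitters. It assumes the langchain-text-splitters package is installed; the chunk sizes and sample text are illustrative and not taken from the video.

```python
from langchain_text_splitters import (
    CharacterTextSplitter,
    Language,
    RecursiveCharacterTextSplitter,
)

text = (
    "Retrieval Augmented Generation (RAG) grounds a language model's answers "
    "in retrieved context. Before retrieval can work well, source documents "
    "are split into chunks, embedded, and stored in a vector database."
)

# Fixed-size chunking: split on a simple separator and pack the pieces into
# chunks of roughly chunk_size characters, with overlap to preserve context
# across chunk boundaries.
fixed = CharacterTextSplitter(separator=" ", chunk_size=80, chunk_overlap=10)
print(fixed.split_text(text))

# Recursive chunking: try larger separators first (paragraphs, then lines,
# then words), recursing to smaller ones until every chunk fits chunk_size.
recursive = RecursiveCharacterTextSplitter(chunk_size=80, chunk_overlap=10)
print(recursive.split_text(text))

# Document/code chunking: language-aware separators (class and function
# boundaries for Python) keep code units intact instead of cutting
# mid-function.
code_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=60, chunk_overlap=0
)
sample_code = "def add(a, b):\n    return a + b\n\n\ndef sub(a, b):\n    return a - b\n"
print(code_splitter.split_text(sample_code))

# Semantic chunking (not run here) splits where embedding similarity between
# adjacent sentences drops; it requires an embedding model, e.g. LlamaIndex's
# SemanticSplitterNodeParser or LangChain's experimental SemanticChunker.
```

In practice, recursive chunking is a common default for prose, while the language-aware splitter is typically used for source code so that retrieval does not return half a function.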

Syllabus

- Intro
- RAG refresher
- Ingestion in RAG
- What is Chunking?
- Why Chunking?
- Fixed-Size Chunking
- Recursive Chunking
- Document / Code Chunking
- Semantic Chunking
- Conclusion

Taught by

AI Bites
