Overview
Explore the development of a custom RAG pipeline built around a fine-tuned 13B-parameter open-source model that mimics Yoda's speech style in this comprehensive tutorial video. Discover practical engineering tips for deploying fine-tuned models within RAG pipelines, and learn efficient model fine-tuning techniques using Gradient's platform. Gain insight into the technical overview, open-source model selection, Gradient workspace setup, the fine-tuning process, a brief explanation of LoRA, hyper-parameter optimization, and testing of both the fine-tuned model and the RAG pipeline. Complementary resources, including a blog post, GitHub repository, and additional learning materials, are available to deepen your understanding of AI, data science, and large language models.
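The core idea covered in the video — retrieving relevant context and feeding it to a fine-tuned generator — can be sketched in a few lines. This is a minimal illustration, not the tutorial's actual code: the keyword-overlap retriever and the `generate()` stub (standing in for a call to the fine-tuned 13B model) are assumptions for the sake of a runnable example.

```python
# Minimal RAG-pipeline sketch. The retriever here is naive keyword
# overlap; a real pipeline would use embeddings and a vector store.

def retrieve(query, documents, k=1):
    """Rank documents by keyword overlap with the query, return top k."""
    q_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt):
    # Stand-in for the fine-tuned model's completion call (hypothetical).
    return f"Answer you, I will. Context of {len(prompt)} chars, I received."

def rag_answer(query, documents):
    """Retrieve context, build a prompt, and ask the generator."""
    context = "\n".join(retrieve(query, documents, k=1))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

docs = [
    "Gradient lets you fine-tune open-source models with LoRA.",
    "Yoda speaks in an object-subject-verb order.",
]
print(rag_answer("How does Yoda speak?", docs))
```

Swapping the stub `generate()` for a call to a hosted fine-tuned model is the main deployment step the video's engineering tips address.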
Syllabus
Intro
Gradient Intro
Technical Overview
Open-Source Model
Gradient Workspace
Fine-tuning
Brief Explanation of LoRA
Hyper-parameters
Testing fine-tuned model
Testing RAG pipeline
Outro
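The LoRA and hyper-parameter items above rest on one piece of arithmetic: instead of updating a full d_out × d_in weight matrix, LoRA trains two small matrices B (d_out × r) and A (r × d_in) and adds (alpha / r) · B·A to the frozen weight. A sketch of the resulting parameter savings, using illustrative layer sizes (not necessarily those of the 13B model in the video):

```python
# LoRA trainable-parameter arithmetic (illustrative dimensions).
d_in, d_out = 4096, 4096   # assumed layer size, for illustration only
r, alpha = 8, 16           # typical LoRA rank and scaling hyper-parameters

full_params = d_in * d_out          # parameters in the frozen weight matrix
lora_params = r * (d_in + d_out)    # trainable parameters in A and B

print(f"full:  {full_params:,}")    # 16,777,216
print(f"lora:  {lora_params:,}")    # 65,536
print(f"ratio: {full_params // lora_params}x fewer trainable parameters")
```

Because only A and B are trained per adapted layer, fine-tuning fits on far smaller hardware, which is what makes a managed service like Gradient's practical for a 13B model.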
Taught by
Data Centric