Overview
Explore the process of fine-tuning Large Language Models (LLMs) with Parameter-Efficient Fine-Tuning (PEFT) and Low-Rank Adaptation (LoRA) in this informative video. Learn about the challenges of traditional full fine-tuning and discover how PEFT addresses them. Delve into the LoRA technique, examining its diagram and understanding how it works. Get acquainted with the Hugging Face PEFT library and follow along with a detailed code walkthrough. Gain practical insight into efficiently fine-tuning decoder-style GPT models and uploading the results to the Hugging Face Hub. Access additional resources, including a LoRA Colab notebook and relevant blog posts, to further enhance your understanding of these advanced fine-tuning techniques.
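The core idea behind LoRA described above can be sketched in a few lines of NumPy: the pretrained weight matrix W stays frozen, and only a low-rank pair of matrices B and A is trained, so the number of trainable parameters drops dramatically. This is a minimal illustration of the technique, not the video's actual code; the dimensions and rank below are arbitrary example values.

```python
import numpy as np

# LoRA: instead of updating the full weight W (d_out x d_in),
# learn a low-rank update B @ A with rank r << min(d_out, d_in).
d_in, d_out, r = 768, 768, 8  # example sizes, not from the video

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init: no change at start

x = rng.standard_normal(d_in)
# Forward pass adds the low-rank branch to the frozen projection:
y = W @ x + B @ (A @ x)

full_params = d_out * d_in          # parameters in the full matrix
lora_params = r * (d_in + d_out)    # parameters LoRA actually trains
print(f"trainable fraction: {lora_params / full_params:.3f}")
```

Because B starts at zero, the adapted model initially behaves exactly like the pretrained one, and here LoRA trains only about 2% of the parameters of the full 768x768 matrix.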
Syllabus
Intro
- Problems with fine-tuning
- Introducing PEFT
- Other PEFT techniques
- LoRA Diagram
- Hugging Face PEFT Library
- Code Walkthrough
Taught by
Sam Witteveen