Overview
Learn to fine-tune Google's 2-billion-parameter Gemma model using Parameter-Efficient Fine-Tuning (PEFT) with LoRA in this 24-minute code walkthrough video. Run inference on the pre-trained Gemma-2b model, understand the motivation behind PEFT, and create a custom fine-tuning dataset derived from databricks-dolly-15k. Explore practical tips for choosing fine-tuning parameters, apply supervised fine-tuning in the Hugging Face ecosystem, and interpret the resulting training visualizations. Follow along with the provided Colab notebook to implement each step, from preliminaries, installation, and parameter configuration through to final model evaluation.
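To give a back-of-the-envelope sense of why PEFT with LoRA is attractive (the matrix sizes and rank below are illustrative assumptions, not figures from the video): instead of updating a full weight matrix W of shape (d, k), LoRA trains two small low-rank factors B (d x r) and A (r x k), so the trainable-parameter count drops from d*k to r*(d + k).

```python
# Illustrative LoRA parameter count for a single weight matrix.
# All numbers here are hypothetical, not taken from the video.
d, k, r = 2048, 2048, 8  # layer dimensions and LoRA rank (assumed values)

full_params = d * k        # parameters updated by full fine-tuning of W
lora_params = r * (d + k)  # LoRA instead trains B (d x r) and A (r x k)

print(full_params)   # 4194304
print(lora_params)   # 32768
print(f"trainable fraction: {lora_params / full_params:.4%}")
```

Even at this single-layer scale, LoRA touches well under 1% of the parameters, which is what makes fine-tuning a 2B-parameter model feasible on a free Colab GPU.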
Syllabus
- Intro
- Preliminaries & Installation
- Run inference on pre-trained Gemma-2b
- Motivation for Parameter-Efficient Fine-Tuning (PEFT)
- Create a custom dataset for fine-tuning
- Tips for fine-tuning parameters
- Supervised Fine-tuning
- Training visualization & interpretation
- Conclusion
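The custom-dataset step in the syllabus starts from databricks-dolly-15k, whose public records carry "instruction", "context", and "response" fields. A minimal sketch of turning one record into a single training string might look like the following; the exact prompt template used in the video may differ:

```python
def format_example(example: dict) -> str:
    """Render one dolly-15k-style record as a prompt/response string.

    The field names match the public databricks-dolly-15k schema; the
    template itself is an assumed, illustrative one.
    """
    if example.get("context"):
        return (
            f"Instruction:\n{example['instruction']}\n\n"
            f"Context:\n{example['context']}\n\n"
            f"Response:\n{example['response']}"
        )
    return (
        f"Instruction:\n{example['instruction']}\n\n"
        f"Response:\n{example['response']}"
    )

sample = {"instruction": "Name a primary color.", "context": "", "response": "Red."}
print(format_example(sample))
```

A function like this is typically mapped over the whole dataset before it is handed to a supervised fine-tuning trainer.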
Taught by
AI Bites