
Fine-tuning Gemma Models with PEFT and LoRA Using HuggingFace Datasets

AI Bites via YouTube

Overview

Learn to fine-tune Google's 2B-parameter Gemma model using Parameter-Efficient Fine-Tuning (PEFT) and LoRA in this 24-minute code walkthrough. Run inference on the pre-trained Gemma-2b checkpoint, understand the motivation behind PEFT, and build a custom fine-tuning dataset derived from databricks-dolly-15k. Explore tips for choosing fine-tuning parameters, supervised fine-tuning in the HuggingFace ecosystem, and how to interpret the resulting training visualizations. Follow along with the provided Colab notebook to implement each step, from preliminaries and installation through parameter configuration to final model evaluation.
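The motivation for PEFT that the video covers can be made concrete with a little arithmetic: LoRA freezes a weight matrix and trains only a low-rank update, so the trainable-parameter count collapses. A pure-Python sketch (the layer size and rank below are hypothetical examples, not the video's exact settings):

```python
# Illustration of LoRA's parameter savings (plain arithmetic, no libraries).
# A frozen weight matrix W of shape (d, k) is adapted as W + B @ A,
# where B is (d, r) and A is (r, k) for a small rank r.
def lora_trainable_params(d: int, k: int, r: int) -> int:
    """Trainable parameters added by a rank-r LoRA adapter on a (d, k) layer."""
    return d * r + r * k

d, k, r = 2048, 2048, 8                 # hypothetical layer size and rank
full = d * k                            # full fine-tuning trains all d*k weights
lora = lora_trainable_params(d, k, r)   # LoRA trains only r*(d + k) weights
print(f"LoRA trains {lora / full:.2%} of the layer's parameters")
# → LoRA trains 0.78% of the layer's parameters
```

At rank 8 on a 2048×2048 layer, LoRA trains under 1% of the layer's weights, which is why the fine-tuning in the video fits on a single Colab GPU.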

Syllabus

- Intro
- Preliminaries & Installation
- Run inference on pre-trained Gemma-2b
- Motivation for Parameter-Efficient Fine-Tuning (PEFT)
- Create a custom dataset for fine-tuning
- Tips for fine-tuning parameters
- Supervised Fine-tuning
- Training visualization & Interpretations
- Conclusion
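The "Create a custom dataset" step works from databricks-dolly-15k, whose records carry instruction, context, and response fields. A sketch of the kind of prompt template such a step typically uses (the function name and exact layout here are illustrative, not taken from the video):

```python
# Hypothetical formatter turning a databricks-dolly-15k record into one
# plain-text training example. The instruction/context/response layout is a
# common convention; the video's exact template may differ.
def format_dolly_example(example: dict) -> str:
    """Render one dolly-15k record as a single prompt string."""
    context = example.get("context", "").strip()
    parts = [f"Instruction:\n{example['instruction']}"]
    if context:  # many dolly records have no context; skip the empty section
        parts.append(f"Context:\n{context}")
    parts.append(f"Response:\n{example['response']}")
    return "\n\n".join(parts)

record = {
    "instruction": "What is LoRA?",
    "context": "",
    "response": "A low-rank adaptation method for fine-tuning large models.",
}
print(format_dolly_example(record))
```

A function like this is what you would pass over the dataset (e.g. via `datasets.Dataset.map`) before handing the formatted text to a supervised fine-tuning trainer.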

Taught by

AI Bites

