Choosing hyperparameters for Mistral 7B fine-tuning
Class Central Classrooms beta
YouTube videos curated by Class Central.
Classroom Contents
Full Fine-Tuning vs LoRA and QLoRA - Comparison and Best Practices
- 1 Comparing full fine-tuning and LoRA fine-tuning
- 2 Video Overview
- 3 Comparing VRAM, Training Time + Quality
- 4 How full fine-tuning works
- 5 How LoRA works
- 6 How QLoRA works
- 7 How to choose learning rate, rank and alpha
- 8 Choosing hyperparameters for Mistral 7B fine-tuning
- 9 Specific tips for QLoRA, regularization and adapter merging
- 10 Tips for using Unsloth
- 11 LoftQ - LoRA aware quantisation
- 12 Step-by-step TinyLlama QLoRA
- 13 Mistral 7B Fine-tuning Results Comparison
- 14 Wrap up
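As background for the chapters on how LoRA works and how to choose rank and alpha, here is a minimal, dependency-free sketch of the LoRA update itself. This is a hypothetical illustration, not code or values from the videos: LoRA freezes the base weight `W` and learns two small factors `B` and `A`, whose product is scaled by `alpha / r` before being added to `W`.

```python
# Hypothetical sketch of the LoRA forward pass (not from the course).
# LoRA replaces a full weight update dW (d_out x d_in) with two
# low-rank factors B (d_out x r) and A (r x d_in), scaled by alpha / r.

def lora_forward(x, W, A, B, alpha, r):
    """Compute y = (W + (alpha / r) * B @ A) @ x using plain lists."""
    scale = alpha / r
    d_out, d_in = len(W), len(W[0])
    # delta = scale * (B @ A): the only part that is trained
    delta = [[scale * sum(B[i][k] * A[k][j] for k in range(r))
              for j in range(d_in)] for i in range(d_out)]
    # y = (W + delta) @ x; W itself stays frozen
    return [sum((W[i][j] + delta[i][j]) * x[j] for j in range(d_in))
            for i in range(d_out)]

# Tiny example: d_in = d_out = 2, rank r = 1, alpha = 2 (made-up numbers).
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (identity)
A = [[1.0, 1.0]]               # r x d_in, trainable
B = [[0.5], [0.5]]             # d_out x r, trainable
y = lora_forward([1.0, 2.0], W, A, B, alpha=2.0, r=1)
```

The point of the factorization is parameter count: training `A` and `B` costs `r * (d_in + d_out)` parameters instead of the `d_in * d_out` needed for a full update, which is why rank `r` (together with the `alpha / r` scale) is the central hyperparameter trade-off the later chapters discuss.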