Full Fine-Tuning vs LoRA and QLoRA - Comparison and Best Practices

Trelis Research via YouTube

Now playing: Comparing VRAM, Training Time + Quality (3 of 14)

Classroom Contents

  1. Comparing full fine-tuning and LoRA fine-tuning
  2. Video Overview
  3. Comparing VRAM, Training Time + Quality
  4. How full fine-tuning works
  5. How LoRA works
  6. How QLoRA works (see the sketch after this list)
  7. How to choose learning rate, rank and alpha
  8. Choosing hyperparameters for Mistral 7B fine-tuning
  9. Specific tips for QLoRA, regularization and adapter merging
  10. Tips for using Unsloth
  11. LoftQ - LoRA-aware quantisation
  12. Step-by-step TinyLlama QLoRA
  13. Mistral 7B Fine-tuning Results Comparison
  14. Wrap up
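
Chapters 5 through 9 cover how LoRA and QLoRA work, how to choose learning rate, rank and alpha, and adapter merging. As a companion to those chapters, here is a minimal sketch of a QLoRA setup built on the Hugging Face transformers, peft and bitsandbytes libraries. The model name, target modules and hyperparameter values are illustrative assumptions for this sketch, not values taken from the video.

```python
# Minimal QLoRA sketch: 4-bit quantized base model + trainable LoRA adapters.
# Assumes transformers, peft and bitsandbytes are installed; all specific
# values below are placeholders, not the video's recommendations.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: load the frozen base model in 4-bit NF4 precision.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # illustrative; any causal LM works here
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapter: rank (r) sets adapter capacity; updates are scaled by
# alpha / r, so alpha = r gives a scale of 1. Values here are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights require grad
```

After training, PEFT's merge_and_unload() folds the adapter weights back into the base model, which is the kind of adapter merging chapter 9 discusses. LoRA adapters are commonly trained with a higher learning rate than full fine-tuning uses; for concrete recommendations, defer to the hyperparameter discussion in chapters 7 and 8 rather than the placeholder values above.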
