Classroom Contents
QLoRA - How to Fine-tune an LLM on a Single GPU with Python Code
1. Intro
2. Fine-tuning recap
3. LLMs are computationally expensive
4. What is Quantization?
5. 4 Ingredients of QLoRA
6. Ingredient 1: 4-bit NormalFloat
7. Ingredient 2: Double Quantization
8. Ingredient 3: Paged Optimizer
9. Ingredient 4: LoRA
10. Bringing it all together
11. Example code: Fine-tuning Mistral-7b-Instruct for YT Comments (a code sketch follows below)
12. What's Next?
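The sketch below illustrates how the four ingredients listed above typically come together when fine-tuning a 7B model on a single GPU. It assumes the Hugging Face transformers, peft, and bitsandbytes libraries; the checkpoint name, LoRA rank, and training hyperparameters are illustrative assumptions, not necessarily the values used in the video.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Assumed checkpoint; the video fine-tunes a Mistral-7b-Instruct model.
model_name = "mistralai/Mistral-7B-Instruct-v0.2"

# Ingredients 1 and 2: load weights in 4-bit NormalFloat with double quantization.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Ingredient 4: attach small trainable LoRA adapters to the attention projections
# (rank and alpha are illustrative choices).
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights are trainable

# Ingredient 3: a paged optimizer absorbs transient GPU memory spikes during training.
training_args = TrainingArguments(
    output_dir="qlora-mistral-yt-comments",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    optim="paged_adamw_8bit",
    bf16=True,
)
# From here, model and training_args would be passed to a Trainer (or trl's SFTTrainer)
# along with a tokenized dataset of YouTube comments and responses.
```

The design choice behind this combination is that the frozen base weights sit in 4-bit NF4 while gradients flow only through the LoRA adapters, which is what lets a 7B-parameter model fit on a single consumer GPU.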