

QLoRA - How to Fine-tune an LLM on a Single GPU with Python Code

Shaw Talebi via YouTube

Overview

Learn how to fine-tune a large language model (LLM) using QLoRA (Quantized Low-Rank Adaptation) on a single GPU in this comprehensive 37-minute video tutorial. Explore the four key ingredients of QLoRA: 4-bit NormalFloat, Double Quantization, Paged Optimizers, and LoRA. Follow along with example Python code to train a custom YouTube comment responder based on Mistral-7b-Instruct. Gain insights into quantization techniques, computational efficiency, and practical implementation. Additional resources include a series playlist, related videos, a blog post, a Colab notebook, a GitHub repository, and Hugging Face model and dataset links for further learning and experimentation.
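To make the quantization idea concrete, here is a minimal sketch of block-style absmax quantization in plain NumPy. This is a simplified illustration only: QLoRA's actual 4-bit NormalFloat uses non-uniform quantization levels tuned to normally distributed weights, which this sketch does not reproduce.

```python
import numpy as np

def absmax_quantize(w, bits=4):
    """Quantize a float32 weight array to signed integers via absmax scaling.

    Simplified illustration of low-bit quantization; not the NF4 data
    type used by QLoRA, which places its levels non-uniformly.
    """
    levels = 2 ** (bits - 1) - 1          # 7 usable levels per sign for 4-bit
    scale = np.abs(w).max() / levels      # one scale for the whole block
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float32 array from the quantized values."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)   # toy "weight block"
q, scale = absmax_quantize(w)
w_hat = dequantize(q, scale)
print("max abs round-trip error:", np.abs(w - w_hat).max())
```

The round-trip error is bounded by half a quantization step (scale / 2), which is the trade-off the video's quantization discussion revolves around: fewer bits mean a coarser step and more error per weight, but far less memory.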

Syllabus

Intro
Fine-tuning recap
LLMs are computationally expensive
What is Quantization?
4 Ingredients of QLoRA
Ingredient 1: 4-bit NormalFloat
Ingredient 2: Double Quantization
Ingredient 3: Paged Optimizer
Ingredient 4: LoRA
Bringing it all together
Example code: Fine-tuning Mistral-7b-Instruct for YT Comments
What's Next?
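The LoRA ingredient above can be sketched in a few lines of NumPy: the base weight W stays frozen, and only a low-rank update B @ A is trained. The dimensions below are illustrative placeholders, not Mistral-7b's actual layer sizes.

```python
import numpy as np

# LoRA freezes the base weight W and learns a low-rank update B @ A,
# so only r * (d_in + d_out) parameters train instead of d_in * d_out.
d_in, d_out, r = 4096, 4096, 8            # illustrative sizes, not Mistral-7b's

W = np.zeros((d_out, d_in), dtype=np.float32)              # frozen base weight
A = (0.01 * np.random.randn(r, d_in)).astype(np.float32)   # trained
B = np.zeros((d_out, r), dtype=np.float32)                 # starts at zero

def lora_forward(x, alpha=16):
    """Forward pass with the effective weight W + (alpha / r) * B @ A."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

full_params = d_in * d_out
lora_params = r * (d_in + d_out)
print(f"trainable params: {lora_params:,} vs full {full_params:,} "
      f"({100 * lora_params / full_params:.2f}%)")
```

Because B starts at zero, the model's output is unchanged at the start of training; fine-tuning then updates only A and B, which is what makes single-GPU training of a 7B model feasible when combined with the 4-bit base weights.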

Taught by

Shaw Talebi

