
Optimizing LLM Fine-Tuning with PEFT and LoRA Adapter-Tuning for GPU Performance

Discover AI via YouTube

Overview

Learn to optimize large language model fine-tuning for minimal GPU memory usage in this comprehensive 35-minute code tutorial on Parameter-Efficient Fine-Tuning (PEFT). Master Low-Rank Adaptation (LoRA) adapter-tuning of INT8-quantized models using PyTorch 2.0, designed to run on GPUs with less than 80 GB of memory, i.e., without a top-end data-center accelerator. Follow along with practical code demonstrations covering the PEFT source code, Llama-LoRA fine-tuning, model creation, configuration, and training parameters. Explore advanced concepts including INT8 quantization, bfloat16 optimization, XLA compiler integration, and weight-tensor freezing. Gain hands-on experience with adapter-tuning, saving PEFT-LoRA adapter weights, and running inference on the newly tuned model, complete with access to a Jupyter notebook and supplementary Hugging Face blog resources.
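
To give a flavor of the workflow the tutorial walks through, here is a minimal sketch of creating a PEFT-LoRA model on top of an 8-bit quantized base using the Hugging Face peft and transformers libraries. This is not the instructor's exact notebook: the model name (facebook/opt-6.7b, matching the OPT 6.7B model named in the syllabus) and the LoRA hyperparameters are illustrative assumptions.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "facebook/opt-6.7b"  # OPT 6.7B as in the syllabus; any causal LM works

# Load the base model with INT8 weights (requires bitsandbytes) so it fits on
# a single GPU with far less than 80 GB of memory.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_8bit=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Cast norm layers to fp32 and enable gradient checkpointing for stable 8-bit training.
model = prepare_model_for_kbit_training(model)

# LoRA configuration: adapter rank, scaling, and which weight matrices get adapters.
# These are common defaults, not necessarily the values used in the video.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

# Freeze the quantized base weights and wrap them with trainable low-rank adapters.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters

Because only the small low-rank matrices receive gradients, the gradient and optimizer-state memory shrinks dramatically, which is what makes fine-tuning a 6.7B-parameter model feasible on a single consumer-class GPU.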

Syllabus

PEFT source code: LoRA, prefix tuning, ...
Llama-LoRA fine-tuning code
Create a PEFT-LoRA Seq2Seq model
Trainable parameters of the PEFT-LoRA model
get_peft_model
PEFT-LoRA 8-bit model of the OPT 6.7B LLM
load_in_8bit
INT8 quantization explained
Fine-tune a quantized model
bfloat16 and the XLA compiler in PyTorch 2.0
Freeze all pre-trained layer weight tensors
Adapter-tuning of the PEFT-LoRA model (sketched in code after this list)
Save tuned PEFT-LoRA adapter weights
Run inference with the newly adapter-tuned PEFT-LoRA LLM
Load your adapter-tuned PEFT-LoRA model
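
Continuing the sketch above, the last four syllabus steps (adapter-tuning, saving the PEFT-LoRA adapter weights, and reloading them for inference) could look roughly like this. Again a hedged sketch, not the video's exact code: train_dataset, the training hyperparameters, the output path, and the prompt are all illustrative assumptions.

import torch
import transformers
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Adapter-tuning: only the LoRA matrices receive gradients; the 8-bit base stays frozen.
trainer = transformers.Trainer(
    model=model,                  # the PEFT-LoRA model built in the sketch above
    train_dataset=train_dataset,  # assumed: a tokenized dataset prepared earlier
    args=transformers.TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        num_train_epochs=1,
        learning_rate=2e-4,
        bf16=True,                # bfloat16 mixed precision, as discussed in the video
        logging_steps=10,
        output_dir="outputs",
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False    # incompatible with gradient checkpointing during training
trainer.train()

# Save only the adapter weights (typically tens of megabytes), not the 6.7B base model.
model.save_pretrained("opt-6.7b-lora-adapter")

# Later, for inference: reload the quantized base and attach the tuned adapter.
base = AutoModelForCausalLM.from_pretrained(
    model_name, load_in_8bit=True, device_map="auto"
)
model = PeftModel.from_pretrained(base, "opt-6.7b-lora-adapter")
model.eval()

inputs = tokenizer("Two things are infinite: ", return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))

Saving only the adapter rather than the full model is what makes PEFT-LoRA checkpoints cheap to store and share: the frozen base can be re-downloaded once and combined with any number of small task-specific adapters.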

Taught by

Discover AI
