Optimizing LLM Fine-Tuning with PEFT and LoRA Adapter-Tuning for GPU Performance

YouTube videos curated by Class Central.

Classroom Contents
- 1 PEFT source code: LoRA, prefix tuning, ...
- 2 Llama - LoRA fine-tuning code
- 3 Create a PEFT-LoRA model for Seq2Seq (see sketch 1 after this list)
- 4 Trainable parameters of the PEFT-LoRA model
- 5 get_peft_model
- 6 PEFT-LoRA 8-bit model of the OPT 6.7B LLM (see sketch 2 after this list)
- 7 load_in_8bit
- 8 INT8 quantization explained
- 9 Fine-tune a quantized model
- 10 bfloat16 and the XLA compiler in PyTorch 2.0 (see sketch 3 after this list)
- 11 Freeze all pre-trained layer weight tensors (covered in sketch 2 below)
- 12 Adapter-tuning of the PEFT-LoRA model (see sketch 4 after this list)
- 13 Save the tuned PEFT-LoRA adapter weights
- 14 Run inference with the newly adapter-tuned PEFT-LoRA LLM (see sketch 5 after this list)
- 15 Load your adapter-tuned PEFT-LoRA model
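
Sketch 1 (items 3-5): a minimal example of building a PEFT-LoRA model for a Seq2Seq task with get_peft_model and inspecting its trainable parameters. It assumes the Hugging Face peft and transformers libraries; the base checkpoint bigscience/mt0-large is an illustrative choice, not necessarily the one used in the videos.

```python
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,  # sequence-to-sequence task
    r=8,             # rank of the low-rank update matrices A and B
    lora_alpha=32,   # scaling factor applied to the LoRA update
    lora_dropout=0.1,
)

# get_peft_model wraps the frozen base model with trainable LoRA adapter layers.
model = get_peft_model(base_model, lora_config)

# Prints e.g. "trainable params: ... || all params: ... || trainable%: ..."
model.print_trainable_parameters()
```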
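Sketch 2 (items 6-9 and 11): loading OPT 6.7B in 8-bit with load_in_8bit and preparing it for LoRA adapter-tuning. This assumes transformers with bitsandbytes installed; note that newer transformers releases prefer quantization_config=BitsAndBytesConfig(load_in_8bit=True), and older peft versions name the preparation helper prepare_model_for_int8_training. Hyperparameters are illustrative.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# load_in_8bit quantizes the pre-trained weights to INT8 via bitsandbytes;
# device_map="auto" spreads the layers across the available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b",
    load_in_8bit=True,
    device_map="auto",
)

# Freezes all pre-trained weight tensors and casts the layer norms (and the
# output head) to fp32 for numerically stable training on the INT8 base.
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # OPT attention projections
    lora_dropout=0.05,
)

# Only the LoRA adapter matrices remain trainable.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```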
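Sketch 3 (item 10): running a model in bfloat16 with PyTorch 2.0 compilation. The video title mentions XLA; PyTorch 2.0's built-in compiler entry point is torch.compile (TorchInductor backend), while XLA itself comes from the separate torch_xla package, so this sketch shows only the torch.compile route. The checkpoint and prompt are illustrative.

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# bfloat16 halves memory versus fp32 while keeping an fp32-like exponent range.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-base",
    torch_dtype=torch.bfloat16,
).to("cuda")

model = torch.compile(model)  # PyTorch 2.0 graph capture and kernel fusion

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
inputs = tokenizer(
    "translate English to German: Hello", return_tensors="pt"
).to("cuda")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```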
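Sketch 4 (items 12-13): adapter-tuning the 8-bit PEFT-LoRA model from sketch 2 with the transformers Trainer, then saving only the adapter weights. The english_quotes dataset and every hyperparameter here are assumptions for illustration, not confirmed course settings; `model` is the peft-wrapped object from sketch 2.

```python
import transformers
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")

# Tiny illustrative text dataset, tokenized for causal LM training.
data = load_dataset("Abirate/english_quotes")
data = data.map(lambda row: tokenizer(row["quote"]), batched=True)

trainer = transformers.Trainer(
    model=model,  # the peft-wrapped 8-bit model from sketch 2
    train_dataset=data["train"],
    args=transformers.TrainingArguments(
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,
        warmup_steps=100,
        max_steps=200,
        learning_rate=2e-4,
        fp16=True,
        logging_steps=10,
        output_dir="outputs",
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False  # incompatible with gradient checkpointing
trainer.train()

# On a PeftModel, save_pretrained writes only the small adapter files
# (adapter_config.json plus the adapter weights), not the frozen base model.
model.save_pretrained("opt-6.7b-lora")
```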
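Sketch 5 (items 14-15): reloading the base model, attaching the saved LoRA adapter with PeftModel.from_pretrained, and running inference. The adapter path matches the save_pretrained call in sketch 4; the prompt is illustrative.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-6.7b", load_in_8bit=True, device_map="auto"
)

# Attach the saved adapter weights on top of the frozen 8-bit base model.
model = PeftModel.from_pretrained(base, "opt-6.7b-lora")
model.eval()

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-6.7b")
batch = tokenizer("Two things are infinite: ", return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**batch, max_new_tokens=40)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```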