Fine-tuning LLama 2 with PEFT, LoRA, 4-bit Quantization, TRL and SFT

Discover AI via YouTube

Overview

Learn how to fine-tune the LLama 2 model in this 15-minute technical tutorial covering parameter-efficient fine-tuning (PEFT) techniques: low-rank adaptation (LoRA) of the model's weight matrices, 4-bit quantization of tensors, Hugging Face's Transformer Reinforcement Learning (TRL) library, and its supervised fine-tuning (SFT) trainer. Create synthetic datasets by using GPT-4 or Claude 2 as the central intelligence that generates task-specific training data from user queries for fine-tuning large language models. Follow along with code examples based on Matt Shumer's Jupyter Notebook implementation for customizing and optimizing LLama 2's performance.
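The sketch below illustrates the general workflow the tutorial describes: loading Llama 2 in 4-bit precision, attaching LoRA adapters via PEFT, and training with TRL's SFTTrainer. It is a minimal, illustrative example rather than the video's or Matt Shumer's exact notebook; the model name, dataset, and hyperparameters are assumptions, and the SFTTrainer arguments follow the TRL ~0.7-era API that was current when the video was made.

```python
# Sketch: QLoRA-style fine-tuning of Llama 2 with PEFT, 4-bit quantization
# (bitsandbytes) and TRL's SFTTrainer. Model, dataset, and hyperparameters
# are illustrative placeholders, not the tutorial's exact configuration.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-hf"  # gated repo: requires accepted license

# 4-bit NF4 quantization so the 7B base model fits on a single GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# LoRA: train small low-rank adapter matrices instead of the full weights.
peft_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    bias="none", task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],
)

# Any dataset with a plain-text "text" column works here (placeholder choice;
# the tutorial instead builds a synthetic dataset with GPT-4 or Claude 2).
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

training_args = TrainingArguments(
    output_dir="llama2-sft-lora",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    num_train_epochs=1,
    logging_steps=10,
    bf16=True,
)

# TRL ~0.7-era SFTTrainer signature; newer TRL versions move
# dataset_text_field and max_seq_length into SFTConfig.
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=512,
    args=training_args,
)
trainer.train()
trainer.save_model("llama2-sft-lora")  # saves only the LoRA adapter weights
```

Because only the LoRA adapter weights are trained and saved, the result is a small adapter that can be merged into, or loaded alongside, the frozen 4-bit base model at inference time.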

Syllabus

Fine-tune LLama2 w/ PEFT, LoRA, 4bit, TRL, SFT code #llama2

Taught by

Discover AI
