

LoRA: Low-Rank Adaptation for Parameter-Efficient Large Language Model Fine-Tuning

AI Bites via YouTube

Overview

Learn about Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning approach for large language models, in this 11-minute technical video. Explore the fundamentals of LoRA, from rank decomposition to its practical implementation in transformer models. Discover why LoRA has become a popular choice for budget-friendly fine-tuning of transformer models, understand its training and inference processes, and learn how to select an appropriate rank. Gain insights from the original research paper and access practical implementations through frameworks such as HuggingFace's PEFT library. Master the technical concepts behind this efficient adaptation technique that is reshaping how large language models are fine-tuned.
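The rank decomposition at the heart of LoRA can be illustrated in a few lines of NumPy: the frozen pretrained weight W is left untouched, while a trainable low-rank update B·A (scaled by alpha/r) is added alongside it. The dimensions, rank, and scaling value below are illustrative choices, not values from the video; the zero initialization of B, however, follows the LoRA paper, so the adapted layer starts out identical to the frozen model.

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 16, 16, 4   # layer dimensions and LoRA rank (illustrative values)
alpha = 8             # LoRA scaling hyperparameter (illustrative value)

W = rng.standard_normal((d, k))         # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable, small random init
B = np.zeros((d, r))                    # trainable, zero init

x = rng.standard_normal(k)

def lora_forward(x):
    # Base path plus the low-rank update, scaled by alpha / r.
    # Only A and B (r*(d+k) parameters) are trained; W (d*k) stays frozen.
    return W @ x + (alpha / r) * (B @ (A @ x))

# At initialization B is zero, so the LoRA output equals the frozen model's.
assert np.allclose(lora_forward(x), W @ x)
```

With r much smaller than d and k, the trainable parameter count drops from d·k to r·(d+k), which is what makes the approach budget-friendly.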

Syllabus

- Intro
- Adapters
- What is LoRA
- Rank Decomposition
- Motivation Paper
- LoRA Training
- LoRA Inference
- LoRA in Transformers
- Choosing the rank
- Implementations
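One point covered under LoRA inference is that the low-rank factors can be merged back into the base weight once training is done, so the deployed model has the original architecture and no extra latency. A minimal sketch of that merge, again in NumPy with illustrative dimensions (not values from the video):

```python
import numpy as np

rng = np.random.default_rng(1)
d, k, r, alpha = 16, 16, 4, 8  # illustrative dimensions and scaling

W = rng.standard_normal((d, k))  # frozen pretrained weight
A = rng.standard_normal((r, k))  # trained LoRA factors
B = rng.standard_normal((d, r))

def adapted_forward(x):
    # During training the update is kept separate from W.
    return W @ x + (alpha / r) * (B @ (A @ x))

# For inference, fold the factors into W once. The merged model computes
# the same function with a single matrix multiply per layer.
W_merged = W + (alpha / r) * (B @ A)

x = rng.standard_normal(k)
assert np.allclose(adapted_forward(x), W_merged @ x)
```

Libraries such as HuggingFace's PEFT expose this same train-with-adapters, merge-for-inference workflow so you don't have to implement it by hand.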

Taught by

AI Bites

