Parameter Efficient Fine-Tuning with Multiple LoRA Adapters for Large Language Models

Discover AI via YouTube

Overview

Dive deep into Parameter-Efficient Fine-Tuning (PEFT) with multiple LoRA adapters in this technical video. Explore Low-Rank Adaptation (LoRA) and its configuration options, including all 16 LoraConfig parameters essential for efficient model fine-tuning. Learn to manage multiple PEFT adapters on pre-trained Large Language Models (LLMs) or Vision Language Models (VLMs): switch between them, and activate or deactivate them as needed. Understand the underlying concepts of matrix factorization and Singular Value Decomposition (SVD), and see how multiple PEFT-LoRA adapters can be combined into a single unified adapter for enhanced model performance.
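
The workflow described above maps closely onto Hugging Face's peft library. Below is a minimal sketch of managing and merging multiple LoRA adapters, assuming that library; the base model name, adapter path, and hyperparameter values are illustrative placeholders, not settings from the video.

# Sketch: multiple LoRA adapters with Hugging Face peft (names/paths are placeholders).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# LoraConfig holds the adapter hyperparameters. The low-rank update is
# delta_W = (alpha / r) * B @ A, with B of shape (d, r) and A of shape (r, k).
config = LoraConfig(
    r=8,                                  # rank of the low-rank matrices A and B
    lora_alpha=16,                        # scaling factor for the update
    target_modules=["q_proj", "v_proj"],  # which weight matrices get adapters
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config, adapter_name="task_a")

# Load a second, already-trained adapter alongside the first (hypothetical path).
model.load_adapter("path/to/task_b_adapter", adapter_name="task_b")

# Switch which adapter is active for subsequent forward passes.
model.set_adapter("task_b")

# Temporarily deactivate all adapters to run the frozen base model.
with model.disable_adapter():
    pass  # base-model inference would go here

# Combine both adapters into a single unified adapter. The "svd" mode applies
# a singular value decomposition to the weighted sum of the low-rank updates.
model.add_weighted_adapter(
    adapters=["task_a", "task_b"],
    weights=[0.5, 0.5],
    adapter_name="merged",
    combination_type="svd",
)
model.set_adapter("merged")

The "svd" combination type is what connects the merge step to the SVD material in the video: summing two rank-r updates can produce a rank-2r matrix, so a truncated SVD is used to project the result back to a single low-rank adapter.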

Syllabus

PEFT w/ Multi LoRA explained (LLM fine-tuning)

Taught by

Discover AI
