Overview
Learn about LoRA (Low-Rank Adaptation), a parameter-efficient fine-tuning technique for neural networks, in this 29-minute educational video. Explore how low-rank matrices and adapters work together to reduce storage requirements and introduce no additional inference latency while maintaining model performance. Master the technical concepts through three passes: starting with low-rank matrices, moving to adapters, and culminating in low-rank adapters. Interactive quizzes after each section reinforce understanding. Drawing from multiple research papers and practical implementations, discover how this approach makes fine-tuning practical by sharply reducing the number of trainable parameters per task while keeping the pretrained network's weights frozen.
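To make the core idea concrete, here is a minimal sketch of a LoRA-style linear layer in PyTorch. It is illustrative only, not the implementation shown in the video: the class name LoRALinear and the hyperparameters r and alpha are assumptions chosen for the example, and the initialization follows the common convention of a small random A and a zero B so training starts from the unmodified pretrained weights.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (illustrative sketch).

    Effective weight: W + (alpha / r) * B @ A, with rank r << min(d_in, d_out).
    """
    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # pretrained weights stay frozen
        self.base.bias.requires_grad_(False)
        # Low-rank factors: A maps d_in -> r, B maps r -> d_out.
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base path plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(d_in=768, d_out=768, r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / total: {total}")  # ~12k of ~600k parameters
```

Because the low-rank update B @ A can be merged into the frozen weight once training is done (W' = W + scale * B @ A), the adapted layer runs exactly as fast as the original at inference time, which is the latency property mentioned in the overview and a key contrast with bottleneck adapters that insert extra layers.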
Syllabus
Introduction
Pass 1: Low-Rank Matrices
Quiz 1
Pass 2: Adapters
Quiz 2
Pass 3: Low-Rank Adapters
Quiz 3
Summary
Taught by
CodeEmporium