EfficientNetV2 - Smaller Models and Faster Training - Paper Explained
Aleksa Gordić - The AI Epiphany via YouTube
Overview
Explore a comprehensive video explanation of the EfficientNetV2 paper, which introduces smaller models and faster training techniques for image classification. Learn about progressive training, the Fused-MBConv layer, and a novel reward function for Neural Architecture Search (NAS). The walkthrough covers a high-level overview, a NAS review, the novel reward function, progressive training, stochastic depth regularization, and results. Gain insights into how EfficientNetV2 reaches higher ImageNet top-1 accuracy than recent models such as NFNets and Vision Transformers while training faster.
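To give a flavor of the progressive training idea discussed in the video, the sketch below shows how image size and regularization strength can be increased linearly across training stages, which is the intuition behind the paper's "progressive learning with adaptive regularization". The stage count, value ranges, and the helper name `stage_settings` are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of progressive learning (assumed values, not the official
# EfficientNetV2 training recipe): training is split into a few stages, and
# both the image size and the regularization strength grow linearly from a
# small/weak setting to a large/strong one.

def stage_settings(stage, num_stages=4,
                   img_size=(128, 300),   # assumed min/max image size (pixels)
                   dropout=(0.1, 0.3),    # assumed min/max dropout rate
                   randaug=(5, 15)):      # assumed min/max RandAugment magnitude
    """Linearly interpolate training settings for a given stage index."""
    t = stage / (num_stages - 1)          # progress from 0.0 to 1.0 across stages
    lerp = lambda lo_hi: lo_hi[0] + t * (lo_hi[1] - lo_hi[0])
    return {
        "image_size": int(lerp(img_size)),
        "dropout": lerp(dropout),
        "randaug_magnitude": lerp(randaug),
    }

if __name__ == "__main__":
    # Early stages use small images with weak regularization (fast, easy);
    # later stages use large images with strong regularization (accurate).
    for s in range(4):
        print(f"stage {s}: {stage_settings(s)}")
```

The design point the video emphasizes: small images with weak regularization make early epochs cheap, while strong regularization on large images at the end preserves final accuracy.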
Syllabus
High-level overview
NAS review
Deep dive
Novel reward
Progressive training
Stochastic depth regularization
Results
Taught by
Aleksa Gordić - The AI Epiphany