Computer Vision Architecture Evolution: ConvNets to Transformers - Lecture 21

University of Central Florida via YouTube

Overview

Explore the evolution of vision architectures and the modernization of convolutional neural networks in this 33-minute lecture from the University of Central Florida. Delve into the hierarchical design of Swin Transformers versus CNNs, macro design changes to ResNet, and improvements such as inverted bottlenecks and larger kernel sizes. Examine micro design changes, including replacing ReLU with GELU, reducing the number of activation and normalization layers, and substituting BatchNorm with LayerNorm. Learn about the final ConvNeXt block, the networks used for evaluation, training settings, and how performance compares across architectures.
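As a rough illustration of the modifications the lecture walks through (not code from the lecture itself), the sketch below assembles a ConvNeXt-style block in PyTorch: a large 7x7 depthwise convolution, a single LayerNorm in place of BatchNorm, an inverted bottleneck that expands the channel width four-fold, and GELU in place of ReLU. Details from the ConvNeXt paper such as layer scale and stochastic depth are omitted for brevity.

```python
import torch
import torch.nn as nn


class ConvNeXtBlock(nn.Module):
    """Minimal ConvNeXt-style block (illustrative sketch, hypothetical names)."""

    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        # Large-kernel depthwise convolution: one 7x7 filter per channel.
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        # Single LayerNorm over the channel dimension (applied channels-last).
        self.norm = nn.LayerNorm(dim)
        # Inverted bottleneck: expand to expansion*dim, then project back to dim.
        self.pwconv1 = nn.Linear(dim, expansion * dim)
        self.act = nn.GELU()  # GELU replaces ReLU
        self.pwconv2 = nn.Linear(expansion * dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = x                      # (N, C, H, W)
        x = self.dwconv(x)
        x = x.permute(0, 2, 3, 1)         # to channels-last for LayerNorm/Linear
        x = self.norm(x)
        x = self.pwconv1(x)
        x = self.act(x)
        x = self.pwconv2(x)
        x = x.permute(0, 3, 1, 2)         # back to channels-first
        return residual + x               # residual connection


if __name__ == "__main__":
    block = ConvNeXtBlock(dim=96)
    out = block(torch.randn(1, 96, 56, 56))
    print(out.shape)  # torch.Size([1, 96, 56, 56])
```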

Syllabus

Introduction
Evolution of Vision Architectures
Hierarchy of Swin vs. CNNs
Modernizing ConvNets
Modernizing ResNet
Macro Design Changes
Changing stage compute ratio
Changing stem to "Patch-ify"
Depthwise Conv. vs Self-Attention
Improvements
Inverted Bottleneck
Larger Kernel Sizes
Micro Designs (mD)
Replace ReLU with GELU
Fewer Activation functions
Fewer Normalization Layers
Substituting BN with LN
Visualization
mD4 Improvement
Separate Downsampling Layer
Final ConvNeXt Block
Networks for Evaluation
Training Settings
Machine Performance Comparison

Taught by

UCF CRCV
