
YouTube

Emerging Properties in Self-Supervised Vision Transformers - Paper Explained

Aleksa Gordić - The AI Epiphany via YouTube

Overview

Explore a comprehensive video analysis of the "Emerging Properties in Self-Supervised Vision Transformers" paper, focusing on DINO (self DIstillation with NO labels) introduced by Facebook AI. Delve into the idea of applying self-supervised learning to vision transformers and discover emerging properties such as self-attention maps that expose segmentation masks and features that perform strongly with simple k-NN classifiers. Follow a detailed walkthrough of DINO's main ideas, attention maps, pseudocode, multi-crop technique, teacher network details, results, ablations, and feature visualizations. Gain insight into how self-supervised learning in computer vision could match the success it has already seen in natural language processing.
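
Since the overview mentions the paper's pseudocode walkthrough, here is a minimal sketch of one DINO-style self-distillation training step, loosely following the spirit of the paper's published PyTorch pseudocode. The function name `dino_step`, the temperature and momentum values, and the `student`/`teacher`/`center` arguments are illustrative assumptions for this sketch, not the video's or the paper's exact code.

```python
# Minimal sketch of a DINO-style self-distillation step (illustrative only;
# names and hyperparameters here are assumptions, not the paper's exact code).
import torch
import torch.nn.functional as F

def dino_step(student, teacher, center, x1, x2,
              tps=0.1, tpt=0.04, momentum=0.996, center_momentum=0.9):
    """One training step on two augmented views x1, x2 of the same images."""
    # Student processes both views; teacher processes both views without gradients.
    s1, s2 = student(x1), student(x2)
    with torch.no_grad():
        t1, t2 = teacher(x1), teacher(x2)

    def loss_fn(t_out, s_out):
        # Teacher targets: centered and sharpened with a low temperature.
        t_probs = F.softmax((t_out - center) / tpt, dim=-1)
        # Cross-entropy between teacher targets and student predictions.
        return -(t_probs * F.log_softmax(s_out / tps, dim=-1)).sum(dim=-1).mean()

    # Cross-view terms only: teacher on one view, student on the other.
    loss = (loss_fn(t1, s2) + loss_fn(t2, s1)) / 2
    loss.backward()

    with torch.no_grad():
        # The teacher is an exponential moving average of the student.
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1 - momentum)
        # Update the output center used to help avoid collapse.
        batch_center = torch.cat([t1, t2]).mean(dim=0)
        center.mul_(center_momentum).add_(batch_center, alpha=1 - center_momentum)

    return loss
```

In practice the caller would also run an optimizer step on the student after this function; the video's pseudocode walkthrough covers the full loop, including multi-crop (several small local crops fed only to the student) rather than just the two global views shown here.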

Syllabus

DINO main ideas, attention maps explained
DINO explained in depth
Pseudocode walk-through
Multi-crop and local-to-global correspondence
More details on the teacher network
Results
Ablations
Collapse analysis
Features visualized and outro

Taught by

Aleksa Gordić - The AI Epiphany
