
Understanding Generalization from Pre-training Loss to Downstream Tasks

Simons Institute via YouTube

Overview

Explore the mysteries behind pre-trained models and their generalization capabilities in this lecture by Tengyu Ma of Stanford University. Delve into the role of pre-training losses in extracting meaningful structural information from unlabeled data, with a focus on the infinite-data regime. Examine how the contrastive loss produces embeddings that capture the manifold distance between raw data points and the graph distance on the positive-pair graph. Investigate the relationship between directions in the embedding space and cluster relationships in the positive-pair graph. Discover recent advances that incorporate architectural inductive bias and demonstrate the implicit bias of optimizers in pre-training. Gain insight into the theoretical frameworks and empirical evidence behind these results, shedding light on the behavior of practical pre-trained models.
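The lecture is a talk rather than a coding tutorial, but a minimal sketch of the kind of contrastive (InfoNCE-style) objective it analyzes may help make the ideas concrete. The snippet below is an illustration under assumed conventions, not material from the lecture: rows z1[i] and z2[i] are embeddings of a positive pair (two views of the same underlying data point), and every other pairing in the batch acts as a negative.

import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) embeddings; row i of each forms a positive pair.
    # Normalize to the unit sphere so similarity is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)

    # Pairwise similarities; the diagonal holds the positive pairs.
    logits = z1 @ z2.T / temperature

    # Cross-entropy against the diagonal: pull positive pairs together,
    # push all other (negative) pairings apart.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy usage: perturbed copies stand in for two augmented views of the data.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
z2 = z1 + 0.05 * rng.normal(size=(8, 16))
print(info_nce_loss(z1, z2))

Minimizing a loss of this form encourages embeddings in which points connected in the positive-pair graph sit close together, which is the structure the lecture relates to manifold and graph distances.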

Syllabus

Understanding Generalization from Pre-training Loss to Downstream Tasks

Taught by

Simons Institute
