
Provably Learning a Multi-Head Attention Layer

Institute for Pure & Applied Mathematics (IPAM) via YouTube

Overview

Explore a 50-minute lecture on provably learning multi-head attention layers, presented by Sitan Chen of Harvard University at IPAM's EnCORE Workshop. Delve into a computational perspective on transformer learnability, examining the gap between transformers' empirical success and our theoretical understanding of them. Discover the first nontrivial provable algorithms and computational lower bounds for next-token prediction in the realizable setting. Learn about a novel algorithm that uses examples to sculpt a convex body containing the unknown parameters, and how it contrasts with traditional approaches to learning multi-layer perceptrons. Gain insight into the challenges of proving that transformer models trained with SGD learn efficiently, and understand the significance of this research in advancing our theoretical grasp of attention mechanisms in machine learning.
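
The "sculpt a convex body" idea can be pictured with a toy example: each labeled sample rules out all parameter vectors inconsistent with it, shrinking a convex region guaranteed to contain the true parameters. Below is a minimal sketch of that idea, using a simple linear model with bounded noise rather than the lecture's actual multi-head attention setting; the dimension, noise bound, and box radius are all hypothetical choices for the demo, not values from the talk.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
d = 5
theta_star = rng.normal(size=d)   # unknown ground-truth parameters (toy)
eps = 0.01                        # assumed noise bound
R = 10.0                          # initial box assumed to contain theta_star

# Each example (x, y) with |y - <theta, x>| <= eps carves two halfspaces
# out of the box; their intersection is a convex body around theta_star.
A, b = [], []
for _ in range(200):
    x = rng.normal(size=d)
    y = theta_star @ x + rng.uniform(-eps, eps)
    A.append(x)
    b.append(y + eps)             #  <theta, x> <= y + eps
    A.append(-x)
    b.append(eps - y)             # -<theta, x> <= eps - y
A, b = np.array(A), np.array(b)

# Pick the Chebyshev center of the carved body (center of the largest
# inscribed ball) via a linear program over the variables (theta, r).
norms = np.linalg.norm(A, axis=1)
c = np.zeros(d + 1)
c[-1] = -1.0                      # maximize the ball radius r
res = linprog(c,
              A_ub=np.hstack([A, norms[:, None]]),
              b_ub=b,
              bounds=[(-R, R)] * d + [(0.0, None)])
theta_hat = res.x[:d]
print("parameter error:", np.linalg.norm(theta_hat - theta_star))
```

As more examples arrive, the body shrinks and any point inside it (here, its Chebyshev center) becomes a better estimate; the lecture's contribution is showing that an approach in this spirit can be made to work for the far harder multi-head attention setting.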

Syllabus

Sitan Chen - Provably learning a multi-head attention layer - IPAM at UCLA

Taught by

Institute for Pure & Applied Mathematics (IPAM)
