

Infinite Limits and Scaling Laws of Neural Networks

Institute for Pure & Applied Mathematics (IPAM) via YouTube

Overview

Explore cutting-edge research on neural network scaling laws and infinite-parameter limits in this one-hour lecture by Blake Bordelon of Harvard University. Delivered at IPAM's Theory and Practice of Deep Learning Workshop, the talk covers the breakthroughs in computer vision and natural language processing enabled by scaling up deep learning models. Examine infinite-parameter limits of deep neural networks that preserve representation learning, and understand the rate at which finite models converge to these limits. Discover how dynamical mean-field theory methods provide an asymptotic description of learning dynamics in infinite-width and infinite-depth networks. Investigate, through empirical analysis, how closely the training dynamics of finite networks track these idealized limits. Gain insight into a theoretical model of neural scaling laws that describes how generalization depends on training time, model size, and data quantity. Learn about compute-optimal scaling strategies, the spectral properties of limiting kernels, and how representation learning can improve neural scaling laws. Understand how, on very hard tasks, representation learning can double the training-time exponent relative to the static kernel limit.
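
For orientation, a commonly cited form of such scaling laws from the broader literature (e.g., Hoffmann et al.'s Chinchilla analysis; the lecture's own parameterization may differ) writes the loss as joint power laws in model size N and dataset size D:

\[
\mathcal{L}(N, D) \;=\; E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\]

where E is an irreducible loss and A, B, α, β are fitted constants. Under a compute budget C ∝ N·D, minimizing this expression gives the compute-optimal allocation N* ∝ C^{β/(α+β)} and D* ∝ C^{α/(α+β)}, which is the kind of trade-off the compute-optimal scaling strategies in the talk address.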

Syllabus

Blake Bordelon - Infinite limits and scaling laws of neural networks - IPAM at UCLA

Taught by

Institute for Pure & Applied Mathematics (IPAM)

