Tuning GPT-3 on a Single GPU via Zero-Shot Hyperparameter Transfer
Massachusetts Institute of Technology via YouTube
Overview
Explore the groundbreaking technique of tuning GPT-3 hyperparameters on a single GPU through zero-shot hyperparameter transfer in this MIT seminar. Delve into maximal update parametrization (µP), under which narrow and wide neural networks share the same optimal hyperparameters. Learn how this method enabled the 6.7-billion-parameter version of GPT-3 to be tuned using only 7% of its pretraining compute budget. Discover the theoretical foundations behind µP's unique properties and its connection to infinite-width neural networks and Tensor Programs theory. Gain insights from Greg Yang, a Microsoft Research scientist, as he presents findings from his research paper. Suitable for both general machine learning practitioners and those interested in the theoretical aspects of neural networks.
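To make the transfer recipe concrete, here is a minimal sketch of the tune-small, train-large workflow, assuming the open-source `mup` PyTorch package released alongside the paper; the MLP model, widths, and learning rate below are illustrative placeholders rather than the exact setup used for GPT-3.

```python
# Minimal sketch of zero-shot hyperparameter transfer with muP.
# Assumes: pip install torch mup. Model, widths, and lr are illustrative.
import torch.nn as nn
from mup import MuReadout, set_base_shapes, MuAdam

class MLP(nn.Module):
    def __init__(self, width: int, d_in: int = 32, d_out: int = 10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d_in, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        # The output layer is a MuReadout so its scale adjusts correctly with width.
        self.readout = MuReadout(width, d_out)

    def forward(self, x):
        return self.readout(self.body(x))

# Base and delta models differ only in width, which tells mup
# which dimensions should be treated as "width" when rescaling.
base_model = MLP(width=64)
delta_model = MLP(width=128)

# Narrow proxy model: cheap to sweep hyperparameters on.
proxy = MLP(width=256)
set_base_shapes(proxy, base_model, delta=delta_model)

# Wide target model: reuses the proxy's hyperparameters zero-shot.
target = MLP(width=4096)
set_base_shapes(target, base_model, delta=delta_model)

best_lr = 1e-2  # e.g. the best learning rate found by sweeping on the proxy
opt_proxy = MuAdam(proxy.parameters(), lr=best_lr)    # width-aware Adam
opt_target = MuAdam(target.parameters(), lr=best_lr)  # same lr, much wider model

# Training loops would go here. (The package documentation also recommends
# re-initializing parameters with mup.init helpers for fully muP-correct init.)
```

The design point this illustrates is that, under µP, the optimal learning rate (and several related hyperparameters) is approximately width-independent, so a sweep run on the narrow proxy can be reused unchanged on the much wider target model.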
Syllabus
Introduction
Material
Underlying Technology
Primary Stability
Other Parameters
Methodology
Training Curves
Summary
Intensive vs Extensive Properties
The Plan
Example
General Tuning
Experimental Results
BERT
Evaluation Results
Theoretical Foundation
Parametrization
Theory of Everything
Taught by
MIT Embodied Intelligence