

Tuning GPT-3 on a Single GPU via Zero-Shot Hyperparameter Transfer

Massachusetts Institute of Technology via YouTube

Overview

Explore the technique of tuning GPT-3 hyperparameters on a single GPU through zero-shot hyperparameter transfer in this MIT seminar. Delve into the maximal update parametrization (µP), under which narrow and wide neural networks share optimal hyperparameters. Learn how this method enabled tuning of the 6.7-billion-parameter version of GPT-3 using only 7% of its pretraining compute budget. Discover the theoretical foundations behind µP's unique properties and its connection to infinite-width neural networks and Tensor Programs theory. Gain insights from Greg Yang, a Microsoft Research scientist with a distinguished academic background, as he presents findings from his research paper. Suitable both for general machine learning practitioners and for those interested in the theoretical aspects of neural networks.

Syllabus

Introduction
Material
Underlying Technology
Primary Stability
Other Parameters
Methodology
Training Curves
Summary
Intensive vs Extensive Properties
The Plan
Example
General Tuning
Experimental Results
BERT
Evaluation Results
Theoretical Foundation
Parametrization
Theory of Everything

Taught by

MIT Embodied Intelligence

Reviews

5.0 rating, based on 1 Class Central review


  • Amrender Singh
    The course is excellent; it provides extensive learning. GPT is new, and the course shows how GPT works.
