Overview
Dive into a comprehensive conference talk on training large language models, presented by Sam Smith of Google DeepMind at IPAM's Theory and Practice of Deep Learning Workshop. Explore key practical concepts behind LLM training, including a brief introduction to Transformers and an explanation of why MLP layers dominate the computation. Gain insights into computational bottlenecks on TPUs and GPUs, techniques for training models too large to fit in a single device's memory, scaling laws, and hyperparameter tuning. Delve into a detailed discussion of LLM inference and, time permitting, discover the design of recurrent models competitive with Transformers, along with their advantages and drawbacks. Drawing from experiences with Griffin and RecurrentGemma, this 53-minute presentation offers valuable knowledge for those interested in the intricacies of LLM development and scaling.
Syllabus
Sam Smith - How to train an LLM - IPAM at UCLA
Taught by
Institute for Pure & Applied Mathematics (IPAM)