Large Language Models - Will They Keep Getting Bigger?
Massachusetts Institute of Technology via YouTube
Overview
Syllabus
Introduction
What are language models
Modern NLP
Scaling
Sparse Models
GShard
BASE Layers
Formal Optimization
Algorithmic Optimization
Experiments
Comparison
Benefits
DEMix Layers
Representations
Simple routing
Training time
Parallel training
Data curation
Unrealistic setting
Domain structure
Inference procedure
Perplexity numbers
Modularity
Remove experts
Summary
Generic language models
Hot dog example
Hot pan example
Common sense example
Large language models
The fundamental challenge
Surface form competition
Flip the reasoning
Key intuition
Noisy Channel Models
Finetuning
Scoring Strings
Web Crawls
Example Output
Structured Data
Efficiency
Questions
Density estimation
Better training objectives
Optimization
Probability
Induction
Multimodality
Outliers
Compute vs Data
Taught by
MIT Embodied Intelligence