Overview
Syllabus
intro
starter code
fixing the initial loss
fixing the saturated tanh
calculating the init scale: “Kaiming init”
batch normalization
batch normalization: summary
real example: resnet50 walkthrough
summary of the lecture
just kidding: part 2: PyTorch-ifying the code
viz #1: forward pass activations statistics
viz #2: backward pass gradient statistics
the fully linear case of no non-linearities
viz #3: parameter activation and gradient statistics
viz #4: update:data ratio over time
bringing back batchnorm, looking at the visualizations
summary of the lecture for real this time
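
The code sketches below illustrate a few of the techniques named in the syllabus. They are minimal sketches with assumed sizes, seeds, and hyperparameters, not the lecture's notebook. This first one covers "fixing the initial loss" and "calculating the init scale: Kaiming init": initialize the output layer near zero so the starting cross-entropy is close to -log(1/vocab_size) (about 3.30 for a 27-character vocabulary), and scale the tanh hidden layer's weights by the gain (5/3)/sqrt(fan_in).

```python
import torch
import torch.nn.functional as F

# assumed sizes: 27-character vocabulary, 3-character context, 10-dim embeddings
vocab_size, block_size, n_embd, n_hidden = 27, 3, 10, 200
fan_in = n_embd * block_size
g = torch.Generator().manual_seed(42)

# hidden layer: Kaiming-style init for tanh, gain 5/3 divided by sqrt(fan_in)
# (the gain compensates for tanh squashing the activations)
W1 = torch.randn((fan_in, n_hidden), generator=g) * (5 / 3) / fan_in ** 0.5
b1 = torch.zeros(n_hidden)

# output layer: scaled way down so the initial logits are near zero / near-uniform,
# which puts the starting loss close to -log(1/vocab_size)
W2 = torch.randn((n_hidden, vocab_size), generator=g) * 0.01
b2 = torch.zeros(vocab_size)

# sanity check on a random batch of (already embedded) inputs
x = torch.randn((32, fan_in), generator=g)
y = torch.randint(0, vocab_size, (32,), generator=g)
h = torch.tanh(x @ W1 + b1)
loss = F.cross_entropy(h @ W2 + b2, y)
print(loss.item(), -torch.log(torch.tensor(1 / vocab_size)).item())  # both ~3.30
```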
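For the "batch normalization" and "PyTorch-ifying the code" sections, here is a minimal sketch of a hand-rolled BatchNorm1d-style layer: normalize each pre-activation feature to zero mean and unit variance over the batch, apply a learnable gain and bias, and keep running estimates of the statistics for inference. The defaults (eps=1e-5, momentum=0.1) are assumptions; torch.nn.BatchNorm1d is the production version.

```python
import torch

class BatchNorm1d:
    # minimal batch normalization layer (sketch, not a drop-in for torch.nn.BatchNorm1d)
    def __init__(self, dim, eps=1e-5, momentum=0.1):
        self.eps = eps
        self.momentum = momentum
        self.training = True
        # learnable scale and shift
        self.gamma = torch.ones(dim)
        self.beta = torch.zeros(dim)
        # running statistics, used instead of batch statistics at inference time
        self.running_mean = torch.zeros(dim)
        self.running_var = torch.ones(dim)

    def __call__(self, x):
        if self.training:
            xmean = x.mean(0, keepdim=True)   # mean over the batch dimension
            xvar = x.var(0, keepdim=True)     # variance over the batch dimension
        else:
            xmean = self.running_mean
            xvar = self.running_var
        xhat = (x - xmean) / torch.sqrt(xvar + self.eps)  # normalize to unit variance
        self.out = self.gamma * xhat + self.beta          # scale and shift
        if self.training:
            with torch.no_grad():
                self.running_mean = (1 - self.momentum) * self.running_mean + self.momentum * xmean
                self.running_var = (1 - self.momentum) * self.running_var + self.momentum * xvar
        return self.out

    def parameters(self):
        # mark these with requires_grad=True before training
        return [self.gamma, self.beta]

bn = BatchNorm1d(200)
h = bn(torch.randn(32, 200) * 5.0 + 3.0)  # deliberately skewed pre-activations
print(h.mean().item(), h.std().item())    # ~0 and ~1 after normalization
```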
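For "viz #1" and "viz #2", a sketch of the forward/backward diagnostics: per layer, print the mean and standard deviation of the tanh activations, the fraction saturated beyond |t| > 0.97, and the statistics of the gradients flowing into those activations (the lecture plots histograms of the same quantities). The toy network, its deliberately too-large init, and the 0.97 threshold are illustrative assumptions.

```python
import torch

torch.manual_seed(42)

# a toy stack of tanh layers with a naive init, just to have something to inspect
fan_in, n_hidden, n_layers, batch = 30, 100, 5, 32
x = torch.randn(batch, fan_in)
acts = []                        # tanh outputs, layer by layer
h = x
for i in range(n_layers):
    W = torch.randn(h.shape[1], n_hidden, requires_grad=True) * 0.3
    h = torch.tanh(h @ W)
    h.retain_grad()              # keep gradients on intermediate tensors for inspection
    acts.append(h)

loss = h.pow(2).mean()           # dummy loss, only to drive a backward pass
loss.backward()

# viz #1 style: forward-pass activation statistics per layer
for i, t in enumerate(acts):
    sat = (t.abs() > 0.97).float().mean().item() * 100
    print(f"layer {i}: act mean {t.mean().item():+.2f}, std {t.std().item():.2f}, saturated {sat:.2f}%")

# viz #2 style: backward-pass gradient statistics per layer
for i, t in enumerate(acts):
    g = t.grad
    print(f"layer {i}: grad mean {g.mean().item():+.2e}, std {g.std().item():.2e}")
```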
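For "viz #4", a sketch of the update:data ratio diagnostic: after each SGD step, record log10(std(lr * grad) / std(parameter)) for each weight matrix; values hovering around -3 are a common rule of thumb, while much larger values suggest the learning rate is too high and much smaller values mean the parameters barely move. The toy model, data, and learning rate are assumptions.

```python
import torch

torch.manual_seed(0)
# assumed toy model and data, only to demonstrate the diagnostic
model = torch.nn.Sequential(torch.nn.Linear(10, 50), torch.nn.Tanh(), torch.nn.Linear(50, 1))
x, y = torch.randn(64, 10), torch.randn(64, 1)
lr = 0.1
ud = []  # one entry per step: update:data ratio for each weight matrix

for step in range(100):
    loss = ((model(x) - y) ** 2).mean()
    for p in model.parameters():
        p.grad = None
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad  # plain SGD update
        # log10(std of the applied update / std of the parameter it modifies);
        # restricted to 2-D weight matrices so the std is well defined
        ud.append([(lr * p.grad.std() / p.data.std()).log10().item()
                   for p in model.parameters() if p.ndim == 2])

print(ud[-1])  # around -3 is the usual rule of thumb; plot ud over time per parameter
```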
Taught by
Andrej Karpathy