Overview
Dive into a comprehensive lecture on stochastic gradient descent and backpropagation, exploring parameterized models, loss functions, and gradient-based methods. Learn how to implement neural networks in PyTorch and understand the generalized form of backpropagation. Examine concrete examples, discuss Jacobian matrix dimensions, and explore various neural net modules while computing their gradients. Gain insights into softmax, logsoftmax, and practical tricks for backpropagation to enhance your understanding of deep learning concepts and techniques.
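The core idea behind stochastic gradient descent, one of the lecture's central topics, can be sketched in a few lines: update the parameters after each sample using the gradient of that sample's loss. The snippet below is a minimal illustrative sketch (not the lecture's PyTorch code), assuming a toy 1-D linear model `y = w*x` with a squared-error loss.

```python
import random

# Minimal sketch of SGD for the toy model y = w * x with squared-error
# loss; the gradient is computed by hand rather than by backpropagation.
def sgd_fit(data, lr=0.1, epochs=50, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(data)            # stochastic: visit samples in random order
        for x, y in data:
            pred = w * x
            grad = 2 * (pred - y) * x  # d/dw of (w*x - y)^2
            w -= lr * grad             # SGD parameter update
    return w

# Data generated from y = 3x; SGD should recover w close to 3.
samples = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w = sgd_fit(list(samples))
print(round(w, 3))
```

In the lecture's PyTorch setting, the hand-derived `grad` line is replaced by automatic differentiation (backpropagation), but the per-sample update loop is the same.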
Syllabus
– Week 2 – Lecture
– Gradient Descent Optimization Algorithm
– Advantages of SGD, Backpropagation for Traditional Neural Net
– PyTorch implementation of Neural Network and a Generalized Backprop Algorithm
– Basic Modules - LogSoftMax
– Practical Tricks for Backpropagation
– Computing Gradients for NN Modules and Practical Tricks for Backpropagation
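One of the practical tricks touched on in the LogSoftMax segment is numerical stability: computing log-softmax naively can overflow `exp()` for large logits, so the maximum is subtracted first. A hedged, stdlib-only sketch of that trick (illustrative, not the lecture's code):

```python
import math

# Log-softmax with the max-subtraction (log-sum-exp) stability trick:
# log_softmax(x)_i = x_i - (m + log(sum_j exp(x_j - m))), with m = max(x).
# Shifting by m leaves the result unchanged but keeps exp() in range.
def log_softmax(xs):
    m = max(xs)
    log_sum = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - log_sum for x in xs]

# Works even for logits where a naive exp(1000.0) would overflow.
out = log_softmax([1000.0, 1001.0, 1002.0])
print([round(v, 4) for v in out])
```

Exponentiating the outputs recovers a valid softmax distribution (the values sum to 1), which is why frameworks such as PyTorch provide a fused `LogSoftmax` rather than composing `log` with `softmax`.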
Taught by
Alfredo Canziani