Overview
Syllabus
intro
micrograd overview
derivative of a simple function with one input
derivative of a function with multiple inputs
starting the core Value object of micrograd and its visualization
manual backpropagation example #1: simple expression
preview of a single optimization step
manual backpropagation example #2: a neuron
implementing the backward function for each operation
implementing the backward function for a whole expression graph
fixing a backprop bug when one node is used multiple times
breaking up a tanh, exercising with more operations
doing the same thing but in PyTorch: comparison
building out a neural net library (multi-layer perceptron) in micrograd
creating a tiny dataset, writing the loss function
collecting all of the parameters of the neural net
doing gradient descent optimization manually, training the network
summary of what we learned, how to go towards modern neural nets
walkthrough of the full code of micrograd on github
real stuff: diving into PyTorch, finding their backward pass for tanh
conclusion
outtakes
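The short sketches below illustrate a few of the topics listed above. They are condensed approximations written for this page, not the lecture's exact code.

For the "derivative of a simple function with one input" topic: a minimal sketch of estimating a derivative numerically by nudging the input by a small step h. The function f here is an assumed example.

```python
def f(x):
    return 3*x**2 - 4*x + 5

# slope of f at x, estimated with a small nudge h
h = 0.0001
x = 3.0
print((f(x + h) - f(x)) / h)  # ~14.0, matching the analytic derivative f'(x) = 6x - 4 at x = 3
```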
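For the Value object and backpropagation topics (the core object, per-operation backward functions, backward over a whole expression graph, and the gradient-accumulation fix when a node is used multiple times): a simplified sketch in the spirit of micrograd, assuming only add, multiply, and tanh; the full library on GitHub covers more operations.

```python
import math

class Value:
    """Scalar value with autograd, in the spirit of micrograd (simplified sketch)."""
    def __init__(self, data, _children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(_children)

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def _backward():
            # += (not =) so gradients accumulate when a node is used multiple times
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def tanh(self):
        t = math.tanh(self.data)
        out = Value(t, (self,))
        def _backward():
            self.grad += (1 - t**2) * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topological order over the expression graph, then chain rule node by node
        topo, visited = [], set()
        def build(v):
            if v not in visited:
                visited.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for node in reversed(topo):
            node._backward()

# tiny neuron example: o = tanh(x1*w1 + x2*w2 + b)
x1, x2 = Value(2.0), Value(0.0)
w1, w2 = Value(-3.0), Value(1.0)
b = Value(6.8813735870195432)
o = (x1*w1 + x2*w2 + b).tanh()
o.backward()
print(o.data, x1.grad, w1.grad)  # ~0.7071, ~-1.5, ~1.0
```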
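For the "doing the same thing but in PyTorch" topic: the same neuron expressed with PyTorch tensors. The gradients should match the sketch above, since PyTorch implements the identical chain-rule logic (including the backward pass for tanh) in its autograd engine.

```python
import torch

x1 = torch.tensor(2.0, requires_grad=True)
x2 = torch.tensor(0.0, requires_grad=True)
w1 = torch.tensor(-3.0, requires_grad=True)
w2 = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(6.8813735870195432, requires_grad=True)

o = torch.tanh(x1*w1 + x2*w2 + b)
o.backward()
print(o.item())        # ~0.7071
print(x1.grad.item())  # ~-1.5
print(w1.grad.item())  # ~1.0
```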
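For the training topics (tiny dataset, loss function, collecting parameters, manual gradient descent): a sketch written with torch.nn so it stands on its own; the layer sizes, learning rate, and step count here are illustrative assumptions, not the lecture's exact values.

```python
import torch

# tiny dataset: 4 examples with 3 inputs each, and desired targets
xs = torch.tensor([[2.0, 3.0, -1.0],
                   [3.0, -1.0, 0.5],
                   [0.5, 1.0, 1.0],
                   [1.0, 1.0, -1.0]])
ys = torch.tensor([1.0, -1.0, -1.0, 1.0])

# multi-layer perceptron: 3 inputs, two hidden layers of 4 tanh neurons, 1 output
model = torch.nn.Sequential(
    torch.nn.Linear(3, 4), torch.nn.Tanh(),
    torch.nn.Linear(4, 4), torch.nn.Tanh(),
    torch.nn.Linear(4, 1), torch.nn.Tanh(),
)
params = list(model.parameters())  # collecting all of the parameters of the net

for step in range(50):
    ypred = model(xs).squeeze(-1)
    loss = ((ypred - ys) ** 2).mean()   # mean squared error loss
    for p in params:                    # zero the gradients before backward
        p.grad = None
    loss.backward()
    with torch.no_grad():               # manual gradient descent step
        for p in params:
            p -= 0.1 * p.grad
    # print(step, loss.item())
```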
Taught by
Andrej Karpathy