Parameter Sharing - Recurrent and Convolutional Nets
YouTube videos curated by Class Central.

Classroom Contents
- 1 – Welcome to class
- 2 – Hypernetworks
- 3 – Shared weights
- 4 – Parameter sharing ⇒ adding the gradients (see the sketch after this list)
- 5 – Max and sum reductions
- 6 – Recurrent nets
- 7 – Unrolling in time
- 8 – Vanishing and exploding gradients
- 9 – Math on the whiteboard
- 10 – RNN tricks
- 11 – RNN for differential equations
- 12 – GRU
- 13 – What is a memory?
- 14 – LSTM – Long Short-Term Memory net
- 15 – Multilayer LSTM
- 16 – Attention for sequence to sequence mapping
- 17 – Convolutional nets
- 18 – Detecting motifs in images
- 19 – Convolution definitions
- 20 – Backprop through convolutions
- 21 – Stride and skip: subsampling and convolution “à trous”
- 22 – Convolutional net architecture
- 23 – Multiple convolutions
- 24 – Vintage ConvNets
- 25 – How does the brain interpret images?
- 26 – Hubel & Wiesel's model of the visual cortex
- 27 – Invariance and equivariance of ConvNets
- 28 – In the next episode…
- 29 – Training time, iteration cycle, and historical remarks
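Item 4 above states that parameter sharing amounts to adding the gradients. As a minimal sketch (in PyTorch, which is an assumption; the listing names no framework), reusing one weight at two points in the computation graph makes backpropagation accumulate a gradient term from each use:

```python
# Minimal sketch of item 4 (PyTorch assumed; not taken from the videos):
# when one parameter is used in several places, backprop sums the gradient
# contributions from every use.
import torch

w = torch.tensor(2.0, requires_grad=True)   # a single shared weight
x1 = torch.tensor(3.0)
x2 = torch.tensor(5.0)

y = w * x1 + w * x2                          # w appears twice in the graph
y.backward()

print(w.grad)                                # tensor(8.) == x1 + x2
```

The same mechanism underlies recurrent nets (items 6–7), where one weight matrix is reused at every time step, and convolutional nets (item 17 onward), where one kernel is reused at every spatial position.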