Recurrent Neural Networks, Vanilla and Gated - LSTM

Alfredo Canziani via YouTube

Classroom Contents

  1. Good morning
  2. How to summarise papers as @y0b1byte with Notion
  3. Why do we need to go to a higher hidden dimension?
  4. Today's class: recurrent neural nets
  5. Vector to sequence (vec2seq)
  6. Sequence to vector (seq2vec)
  7. Sequence to vector to sequence (seq2vec2seq)
  8. Sequence to sequence (seq2seq)
  9. Training a recurrent network: backpropagation through time
  10. Training example: language model
  11. Vanishing & exploding gradients and gating mechanism
  12. The Long Short-Term Memory (LSTM)
  13. Jupyter Notebook and PyTorch in action: sequence classification (a minimal sketch follows this list)
  14. Inspecting the activation values
  15. Closing remarks
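
The final chapters demonstrate sequence classification with an LSTM in PyTorch, i.e. the seq2vec setting: the network reads a whole sequence and emits a single label from the last hidden state. The sketch below illustrates that setup but is not the lecture's actual notebook; the synthetic dataset, layer sizes, and training hyperparameters are assumptions chosen for brevity.

```python
# Minimal LSTM sequence classifier in PyTorch (seq2vec).
# Illustrative sketch only: the synthetic task, dimensions, and
# hyperparameters are assumptions, not the lecture's notebook code.
import torch
import torch.nn as nn


class SequenceClassifier(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super().__init__()
        # batch_first=True -> inputs are shaped (batch, time, features)
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):
        # h_n holds the hidden state after the last time step: (1, batch, hidden)
        _, (h_n, _) = self.lstm(x)
        return self.fc(h_n[-1])  # logits of shape (batch, num_classes)


def make_batch(batch_size=32, seq_len=20, input_size=8):
    """Synthetic data: label 1 if the first feature's sum exceeds half the length."""
    x = torch.rand(batch_size, seq_len, input_size)
    y = (x[:, :, 0].sum(dim=1) > seq_len / 2).long()
    return x, y


model = SequenceClassifier(input_size=8, hidden_size=32, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for step in range(200):
    x, y = make_batch()
    logits = model(x)
    loss = criterion(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    if step % 50 == 0:
        acc = (logits.argmax(dim=1) == y).float().mean()
        print(f"step {step:3d}  loss {loss.item():.3f}  acc {acc.item():.2f}")
```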
