Overview
Explore the intricacies of the Transformer Decoder architecture in this 26-minute video tutorial. Delve into key concepts such as text processing, data batching, position encoding, and the creation of query, key, and value tensors. Learn about masked multi-head self-attention, residual connections, and multi-head cross-attention. Understand how to complete the decoder layer, train the Transformer model, and perform inference. Gain valuable insights into the inner workings of this powerful neural network architecture used in natural language processing tasks.
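To make the listed concepts concrete, here is a minimal PyTorch sketch (not the video's own code) of two of the decoder ideas covered: sinusoidal position encoding and the causal mask used in masked multi-head self-attention, with a residual connection around the attention sub-layer. The sizes (d_model=512, num_heads=8, seq_len=10) are illustrative assumptions.

```python
# Minimal, illustrative sketch (assumptions, not the video's code):
# sinusoidal position encoding + causally masked multi-head self-attention.
import math
import torch
import torch.nn as nn

def sinusoidal_position_encoding(seq_len: int, d_model: int) -> torch.Tensor:
    """Return the (seq_len, d_model) sinusoidal position-encoding matrix."""
    position = torch.arange(seq_len).unsqueeze(1).float()          # (seq_len, 1)
    div_term = torch.exp(torch.arange(0, d_model, 2).float()
                         * (-math.log(10000.0) / d_model))         # (d_model/2,)
    pe = torch.zeros(seq_len, d_model)
    pe[:, 0::2] = torch.sin(position * div_term)                   # even dimensions
    pe[:, 1::2] = torch.cos(position * div_term)                   # odd dimensions
    return pe

def causal_mask(seq_len: int) -> torch.Tensor:
    """Boolean mask: position i may only attend to positions <= i."""
    return torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)

# Illustrative sizes (assumed, not taken from the video).
batch, seq_len, d_model, num_heads = 2, 10, 512, 8

tokens = torch.randn(batch, seq_len, d_model)      # stand-in for embedded target tokens
x = tokens + sinusoidal_position_encoding(seq_len, d_model)

# Masked multi-head self-attention: query, key, and value all come from x.
self_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
out, _ = self_attn(x, x, x, attn_mask=causal_mask(seq_len))

# Residual connection around the attention sub-layer, followed by layer norm.
x = nn.LayerNorm(d_model)(x + out)
print(x.shape)  # torch.Size([2, 10, 512])
```

In the full decoder layer, this block is followed by multi-head cross-attention, where the queries come from the decoder state and the keys and values come from the encoder output.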
Syllabus
Introduction
What is the Encoder doing?
Text Processing
Why are we batching data?
Position Encoding
Query, Key, and Value Tensors
Masked Multi-Head Self-Attention
Residual Connections
Multi-Head Cross-Attention
Finishing up the Decoder Layer
Training the Transformer
Inference for the Transformer
Taught by
CodeEmporium