Decoder Flow in Transformer Model

CodeEmporium via YouTube

Overview

Dive into a comprehensive 40-minute video tutorial that breaks down how to code a Transformer decoder from scratch. Learn about the key components of the Transformer architecture, including parameter setup, input/output handling, masking techniques, and the intricacies of the decoder's forward pass. Explore essential concepts such as Masked Multi-Head Self-Attention, Layer Normalization, Multi-Head Cross Attention, and Feed Forward networks. Follow along as the tutorial guides you through instantiating the decoder, implementing decoder layers, and completing the entire decoder flow. Gain valuable insight into the inner workings of this powerful neural network architecture, ideal for anyone looking to deepen their understanding of natural language processing and machine learning.
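The components named above (masked self-attention, dropout plus layer normalization, cross-attention over the encoder output, and a feed-forward block) fit together into a single decoder layer. The sketch below is a minimal PyTorch illustration of that structure, not the tutorial's actual code; the class name `DecoderLayer` and the default hyperparameters are assumptions for the example.

```python
import torch
import torch.nn as nn

class DecoderLayer(nn.Module):
    """One Transformer decoder layer: masked multi-head self-attention,
    multi-head cross-attention over the encoder output, and a feed-forward
    network, each sub-layer followed by dropout, a residual connection,
    and layer normalization."""

    def __init__(self, d_model=512, num_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.ReLU(),  # activation between the two feed-forward projections
            nn.Linear(d_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.norm3 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, y, enc_out, tgt_mask=None):
        # Masked multi-head self-attention over the target sequence
        attn, _ = self.self_attn(y, y, y, attn_mask=tgt_mask)
        y = self.norm1(y + self.dropout(attn))
        # Cross-attention: queries come from the decoder,
        # keys and values from the encoder output
        attn, _ = self.cross_attn(y, enc_out, enc_out)
        y = self.norm2(y + self.dropout(attn))
        # Position-wise feed-forward network
        y = self.norm3(y + self.dropout(self.ffn(y)))
        return y
```

A full decoder then stacks several such layers and runs them in sequence during the forward pass, which is the flow the tutorial builds up step by step.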

Syllabus

Introduction
Parameters of Transformer
Inputs and Outputs of Transformer
Masking
Instantiating Decoder
Decoder Forward Pass
Decoder Layer
Masked Multi-Head Self-Attention
Dropout + Layer Normalization
Multi-Head Cross Attention
Feed Forward, Activation
Completing the Decoder Flow
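The masking step in the syllabus typically refers to a look-ahead (causal) mask, which prevents each target position from attending to positions after it during training. A minimal sketch, assuming PyTorch and the boolean-mask convention used by `nn.MultiheadAttention` (where `True` marks positions that may not be attended to):

```python
import torch

def look_ahead_mask(size):
    """Boolean (size x size) mask: entry [i, j] is True when position i
    must NOT attend to position j, i.e. for all future positions j > i."""
    return torch.triu(torch.ones(size, size, dtype=torch.bool), diagonal=1)

mask = look_ahead_mask(4)
# Row i permits attention only to positions 0..i.
```

This mask can be passed as the `attn_mask` argument of the decoder's self-attention so that generation remains autoregressive.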

Taught by

CodeEmporium
