Overview
Explore the intricacies of attention mechanisms and the Transformer architecture in this comprehensive 1-hour 18-minute lecture by Alfredo Canziani. Delve into the concept of self-attention and its role in building hidden representations of the input. Discover the key-value store paradigm and learn how queries, keys, and values can each be represented as a rotation (linear transformation) of the input. Gain insight into the Transformer architecture through a detailed walkthrough of a forward pass, and compare the encoder-decoder paradigm with sequential (recurrent) architectures. The lecture concludes with a Q&A session, providing an opportunity to clarify complex concepts and deepen understanding of attention mechanisms and their implementation in PyTorch.
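To make the key-value store idea concrete, here is a minimal PyTorch sketch of scaled dot-product self-attention, where queries, keys, and values are linear transformations of the input and the softmax over query-key similarities acts as a soft lookup over the values. The function name, tensor shapes, and random weights are illustrative assumptions, not the lecture's exact code.

import torch
import torch.nn.functional as F

def self_attention(x, W_q, W_k, W_v):
    # Queries, keys, and values as "rotations" (linear maps) of the input x.
    q = x @ W_q  # (t, d)
    k = x @ W_k  # (t, d)
    v = x @ W_v  # (t, d)
    # Compare each query against every key, scaled by sqrt(d).
    scores = q @ k.transpose(-2, -1) / (k.size(-1) ** 0.5)  # (t, t)
    # Softmax turns similarities into soft key-value lookup weights.
    a = F.softmax(scores, dim=-1)
    # Weighted sum of values: the hidden representation of the input.
    return a @ v

# Illustrative usage with random weights.
t, d = 5, 8  # sequence length, model dimension (arbitrary)
x = torch.randn(t, d)
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))
h = self_attention(x, W_q, W_k, W_v)
print(h.shape)  # torch.Size([5, 8])

In a hard key-value store a query retrieves exactly one value; here the softmax makes the retrieval soft, so every value contributes in proportion to how well its key matches the query.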
Syllabus
– Week 12 – Practicum
– Attention
– Key-value store
– Transformer and PyTorch implementation
– Q&A
Taught by
Alfredo Canziani