Overview
Explore the evolution of transfer learning in this 49-minute lecture from the Full Stack Deep Learning Spring 2021 series. Dive into the origins of transfer learning in computer vision and its application to natural language processing through embeddings. Discover NLP's breakthrough moment with ELMo and ULMFiT and their impact on benchmarks like SQuAD, SNLI, and GLUE. Delve into the rise of Transformers, covering key concepts such as masked self-attention, positional encoding, and layer normalization. Examine Transformer variants including BERT, the GPT series, DistilBERT, and T5. Watch impressive GPT-3 demonstrations and gain insight into future directions for transfer learning and Transformers.
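Masked self-attention, mentioned above as one of the lecture's key concepts, can be illustrated in a few lines. The following is a minimal NumPy sketch of single-head causal attention (the mechanism used in GPT-style decoders), not code from the lecture itself; the function and variable names are illustrative.

```python
import numpy as np

def masked_self_attention(x, w_q, w_k, w_v):
    """Single-head masked (causal) self-attention.

    x: (seq_len, d_model) input embeddings
    w_q, w_k, w_v: (d_model, d_k) projection matrices
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)                  # (seq_len, seq_len)
    # Causal mask: position i may only attend to positions <= i,
    # so future tokens cannot leak into the prediction.
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -1e9, scores)
    # Row-wise softmax over the (masked) attention scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
seq_len, d_model, d_k = 4, 8, 8
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out = masked_self_attention(x, w_q, w_k, w_v)
print(out.shape)
```

The defining property of the mask is that perturbing a later token leaves the outputs at earlier positions unchanged, which is what makes left-to-right language modeling possible.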
Syllabus
- Introduction
- Transfer Learning in Computer Vision
- Embeddings and Language Models
- NLP's ImageNet moment: ELMo and ULMFiT on datasets like SQuAD, SNLI, and GLUE
- Rise of Transformers
- Attention in Detail: Masked Self-Attention, Positional Encoding, and Layer Normalization
- Transformer Variants: BERT, GPT/GPT-2/GPT-3, DistilBERT, T5, etc.
- GPT-3 Demos
- Future Directions
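The "Attention in Detail" item above also covers positional encoding, which gives the otherwise order-blind attention mechanism a sense of token position. Here is a minimal NumPy sketch of the sinusoidal scheme from the original Transformer paper ("Attention Is All You Need"), included for illustration rather than taken from the lecture:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """Sinusoidal positional encodings: even dims use sin, odd dims use cos,
    with wavelengths forming a geometric progression up to 10000 * 2*pi."""
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]       # (1, d_model // 2)
    angles = positions / (10000 ** (dims / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positional_encoding(seq_len=16, d_model=8)
print(pe.shape)
```

These encodings are simply added to the token embeddings before the first attention layer; because each dimension oscillates at a different frequency, nearby positions receive similar but distinguishable vectors.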
Taught by
The Full Stack