Transfer Learning and Transformers - Full Stack Deep Learning - Spring 2021

The Full Stack via YouTube

- Attention in Detail: Masked Self-Attention, Positional Encoding, and Layer Normalization

6 of 9


Classroom Contents


  1. 1 - Introduction
  2. 2 - Transfer Learning in Computer Vision
  3. 3 - Embeddings and Language Models
  4. 4 - NLP's ImageNet moment: ELMo and ULMFiT on datasets like SQuAD, SNLI, and GLUE
  5. 5 - Rise of Transformers
  6. 6 - Attention in Detail: Masked Self-Attention, Positional Encoding, and Layer Normalization
  7. 7 - Transformer Variants: BERT, GPT/GPT-2/GPT-3, DistilBERT, T5, etc.
  8. 8 - GPT-3 Demos
  9. 9 - Future Directions
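Segment 6 of the lecture covers masked self-attention. As a companion to that video, here is a minimal single-head sketch of the causal-masking idea in NumPy; all names and shapes are illustrative choices, not taken from the lecture itself:

```python
import numpy as np

def masked_self_attention(x, w_q, w_k, w_v):
    """Single-head masked self-attention over a sequence x of shape (T, d)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # (T, T) pairwise similarity scores
    # Causal mask: position t may only attend to positions <= t,
    # so future positions get -inf before the softmax (weight -> 0).
    mask = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)
    # Row-wise softmax (numerically stabilized by subtracting the row max)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Illustrative usage with random projections
rng = np.random.default_rng(0)
T, d = 4, 8
x = rng.normal(size=(T, d))
w_q, w_k, w_v = (rng.normal(size=(d, d)) for _ in range(3))
out = masked_self_attention(x, w_q, w_k, w_v)
```

Because of the mask, the first position can attend only to itself, so its output is exactly its own value vector; later positions mix values from all earlier positions.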
