Byte Latent Transformers - Understanding Meta's BLT Model for Efficient Language Processing
YouTube videos curated by Class Central.
Classroom Contents
1. Intro
2. Intro to Transformers
3. Subword Tokenizers
4. Embeddings
5. How does vocab size impact Transformer FLOPs?
6. Byte Encodings
7. Pros and Cons of Byte Tokens
8. Patches
9. Entropy
10. Entropy Model
11. Dynamically Allocate Compute
12. Latent Space
13. BLT Architecture
14. Local Encoder
15. Latent Transformer and Local Decoder in BLT
16. Outro
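
Chapters 8 through 11 center on BLT's entropy-based patching: a small byte-level model scores how predictable each next byte is, and patch boundaries are placed where that score spikes, so harder-to-predict regions get more patches and therefore more compute. Below is a minimal, hypothetical Python sketch of that idea; the unigram "entropy model", the `threshold` value, and the function names are illustrative stand-ins, not Meta's implementation.

```python
# Toy sketch of the entropy-based patching idea from chapters 8-11.
# Assumption: BLT's real entropy model is a small autoregressive byte-level
# transformer; here a unigram frequency model stands in for it, and per-byte
# surprisal is used as a proxy for next-byte entropy.
import math
from collections import Counter

def byte_surprisals(data: bytes) -> list[float]:
    """Surprisal (in bits) of each byte under a unigram model of the input."""
    counts = Counter(data)                      # byte value -> frequency
    total = len(data)
    probs = {b: c / total for b, c in counts.items()}
    return [-math.log2(probs[b]) for b in data]

def split_into_patches(data: bytes, threshold: float = 4.0) -> list[bytes]:
    """Start a new patch wherever surprisal exceeds the threshold, so
    hard-to-predict regions yield more (smaller) patches and thus receive
    more latent-transformer steps. The threshold here is illustrative."""
    patches: list[list[int]] = [[]]
    for byte, h in zip(data, byte_surprisals(data)):
        if h > threshold and patches[-1]:       # boundary at a surprising byte
            patches.append([])
        patches[-1].append(byte)
    return [bytes(p) for p in patches]

if __name__ == "__main__":
    for patch in split_into_patches(b"the quick brown fox jumps over the lazy dog"):
        print(patch)
```

In the actual model, boundaries come from the entropy of the learned next-byte distribution rather than global byte frequencies, but the control flow is the same: predictability decides where compute is spent.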