
Train Short, Test Long - Attention With Linear Biases Enables Input Length Extrapolation

Yannic Kilcher via YouTube

Overview

Explore ALiBi (Attention with Linear Biases), a method for improving sequence-length extrapolation in transformer models. Dive into the limitations of traditional position encodings and discover how ALiBi's simple yet effective approach enables efficient extrapolation to sequences longer than those seen during training. Learn about the implementation details, including how to choose the slope parameter, and examine experimental results demonstrating ALiBi's performance advantages. Gain insights into why this method leads to better outcomes and understand its potential impact on natural language processing tasks.
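
As a rough illustration of the idea covered in the video (a minimal sketch, not the authors' reference implementation), the snippet below shows one way the per-head linear biases and the geometrically decreasing slopes could be computed in PyTorch; the function names `get_alibi_slopes` and `alibi_bias` are hypothetical.

```python
import torch

def get_alibi_slopes(num_heads: int) -> torch.Tensor:
    # Geometric sequence of slopes; for 8 heads this gives 1/2, 1/4, ..., 1/256.
    # (Sketch assuming a power-of-two head count.)
    start = 2 ** (-8.0 / num_heads)
    return torch.tensor([start ** (h + 1) for h in range(num_heads)])

def alibi_bias(num_heads: int, seq_len: int) -> torch.Tensor:
    # Signed distance between key position j and query position i (j - i).
    positions = torch.arange(seq_len)
    distances = positions[None, :] - positions[:, None]    # (seq_len, seq_len)
    slopes = get_alibi_slopes(num_heads)                    # (num_heads,)
    # Bias penalizes distant keys linearly: slope * (j - i) is negative for past keys.
    return slopes[:, None, None] * distances[None, :, :]   # (num_heads, seq_len, seq_len)

# Usage sketch: add the bias to the raw attention scores before the softmax,
# alongside the usual causal mask.
# scores: (batch, num_heads, seq_len, seq_len)
# scores = scores + alibi_bias(num_heads, seq_len)
```

Because the bias depends only on the query-key distance and not on learned position embeddings, the same computation applies unchanged to sequences longer than those used for training, which is the core of the extrapolation claim discussed in the video.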

Syllabus

- Intro & Overview
- Position Encodings in Transformers
- Sinusoidal Position Encodings
- ALiBi Position Encodings
- How to choose the slope parameter
- Experimental Results
- Comments & Conclusion

Taught by

Yannic Kilcher
