Fastformer - Additive Attention Can Be All You Need

Yannic Kilcher via YouTube

Overview

Explore a detailed analysis of the Fastformer, a proposed efficient Transformer model for text understanding, in this 36-minute video. Dive into the architecture's key components, including additive attention and element-wise multiplication, and understand how it aims to achieve linear complexity for processing long sequences. Compare Fastformer to classic attention mechanisms, examine potential issues with the architecture, and evaluate its effectiveness through experimental results. Gain insights into the ongoing research efforts to improve Transformer models for handling long contexts efficiently.
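For a concrete picture of the mechanism discussed in the video, the sketch below shows a minimal single-head version of Fastformer-style additive attention, assuming the formulation from the Fastformer paper (queries pooled into a global query by additive attention, element-wise multiplication with the keys, a pooled global key multiplied into the values, then an output projection with a query residual). Function and parameter names here are illustrative, not taken from the video or the authors' code.

```python
# Minimal single-head sketch of Fastformer-style additive attention (NumPy).
# Every step is linear in the sequence length n, unlike the O(n^2) pairwise
# score matrix of classic self-attention.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fastformer_attention(x, Wq, Wk, Wv, wq, wk, Wo):
    """x: (n, d) token embeddings; weight shapes are (d, d) or (d,)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv                 # (n, d) each
    d = q.shape[-1]

    # Additive attention: pool all queries into a single global query vector.
    alpha = softmax(q @ wq / np.sqrt(d), axis=0)      # (n,)
    q_global = (alpha[:, None] * q).sum(axis=0)       # (d,)

    # Mix the global query into every key by element-wise multiplication,
    # then pool the result into a single global key the same way.
    p = q_global[None, :] * k                         # (n, d)
    beta = softmax(p @ wk / np.sqrt(d), axis=0)       # (n,)
    k_global = (beta[:, None] * p).sum(axis=0)        # (d,)

    # Mix the global key into every value, project, and add the query back.
    u = k_global[None, :] * v                         # (n, d)
    return u @ Wo + q                                 # (n, d)

# Tiny smoke test with random weights.
rng = np.random.default_rng(0)
n, d = 6, 8
x = rng.normal(size=(n, d))
out = fastformer_attention(
    x,
    Wq=rng.normal(size=(d, d)), Wk=rng.normal(size=(d, d)),
    Wv=rng.normal(size=(d, d)), wq=rng.normal(size=d),
    wk=rng.normal(size=d), Wo=rng.normal(size=(d, d)),
)
print(out.shape)  # (6, 8)
```

Because the per-token interactions are only element-wise multiplications with a single pooled vector, the cost grows linearly with sequence length; whether this still deserves the name "attention" is one of the questions the video takes up.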

Syllabus

- Intro & Outline
- Fastformer description
- Baseline: Classic Attention
- Fastformer architecture
- Additive Attention
- Query-Key element-wise multiplication
- Redundant modules in Fastformer
- Problems with the architecture
- Is this even attention?
- Experimental Results
- Conclusion & Comments

Taught by

Yannic Kilcher
