Rethinking and Improving Relative Position Encoding for Vision Transformer - Lecture 23

University of Central Florida via YouTube

Overview

Explore advances in Relative Position Encoding (RPE) for Vision Transformers in this 32-minute lecture from the University of Central Florida's CAP6412 2022 series. Review the background on self-attention and position encoding, tracing the shift from absolute to relative position encoding, including the RPE used in Transformer-XL. Examine the proposed improvements to RPE: bias and contextual modes, a piecewise index function, and 2D relative position calculation. Analyze ablation results comparing directed vs. undirected relative positions, bias vs. contextual modes, shared vs. unshared encodings, piecewise vs. clipped index functions, and different numbers of buckets. Review the component-wise analysis, computational complexity considerations, and visualizations that demonstrate the effectiveness of these enhancements, and conclude with an understanding of how they improve Vision Transformer performance across computer vision tasks.
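
To make those ideas concrete, below is a minimal PyTorch sketch of the bias mode of relative position encoding combined with a piecewise index function, in the spirit of the techniques the lecture covers. It is not the lecture's or the paper's exact implementation: the class name, grid size, and the constants alpha, beta, and gamma are illustrative assumptions, and the logarithmic bucketing formula is one plausible instance of a piecewise index function.

    import math
    import torch
    import torch.nn as nn

    def piecewise_index(rel, alpha=2.0, beta=4.0, gamma=8.0):
        # Identity for small offsets; logarithmically compressed, clipped
        # buckets for large ones, so distant positions share coarse buckets.
        abs_rel = rel.float().abs()
        log_idx = (alpha + torch.log(abs_rel.clamp(min=1.0) / alpha)
                   / math.log(gamma / alpha) * (beta - alpha)).round().clamp(max=beta)
        return torch.where(abs_rel <= alpha, rel.float(),
                           torch.sign(rel.float()) * log_idx).long()

    class RelPosBiasAttention(nn.Module):
        # Single-head self-attention where a learned scalar per 2D
        # relative-offset bucket is added to the attention logits (bias mode).
        def __init__(self, dim, height, width, alpha=2.0, beta=4.0, gamma=8.0):
            super().__init__()
            self.scale = dim ** -0.5
            self.qkv = nn.Linear(dim, dim * 3)
            # 2D relative coordinates between every pair of grid positions.
            coords = torch.stack(torch.meshgrid(
                torch.arange(height), torch.arange(width), indexing="ij")).flatten(1)
            rel = coords[:, :, None] - coords[:, None, :]      # (2, N, N)
            side = 2 * int(beta) + 1                           # buckets per axis
            by = piecewise_index(rel[0], alpha, beta, gamma) + int(beta)
            bx = piecewise_index(rel[1], alpha, beta, gamma) + int(beta)
            self.register_buffer("bucket", by * side + bx)     # (N, N)
            self.bias_table = nn.Parameter(torch.zeros(side * side))

        def forward(self, x):                                  # x: (B, N, dim)
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            logits = q @ k.transpose(-2, -1) * self.scale
            logits = logits + self.bias_table[self.bucket]     # add RPE bias
            return logits.softmax(dim=-1) @ v

    # Example: a 14x14 token grid, as in a typical ViT with 16x16 patches.
    attn = RelPosBiasAttention(dim=64, height=14, width=14)
    out = attn(torch.randn(2, 196, 64))                        # (2, 196, 64)

In the contextual mode that the lecture contrasts with this, the fixed scalar table would be replaced by biases computed from the query (or key/value) features, so the encoding depends on content as well as relative position.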

Syllabus

Intro
Background and previous work
Self-attention
Absolute Position Encoding and Relative Position Encoding (RPE)
RPE in Transformer-XL
Bias and Contextual Mode
A Piecewise Index Function
2D Relative Position Calculation
Experiments
Implementation details
Directed vs. Undirected
Bias vs. Contextual
Shared vs. Unshared
Piecewise vs. Clip
Number of buckets
Component-wise analysis
Complexity Analysis
Visualization
Conclusion

Taught by

UCF CRCV
