Rethinking and Improving Relative Position Encoding for Vision Transformer - Lecture 23

UCF CRCV via YouTube

Classroom Contents

  1. Intro
  2. Background and previous work
  3. Self-attention
  4. Absolute Position Encoding and Relative Position Encoding (RPE)
  5. RPE in Transformer-XL
  6. Bias and Contextual Mode
  7. A Piecewise Index Function (see the sketch after this list)
  8. 2D Relative Position Calculation
  9. Experiments
  10. Implementation details
  11. Directed vs. Undirected, Bias vs. Contextual
  12. Shared vs. Unshared
  13. Piecewise vs. Clip
  14. Number of buckets
  15. Component-wise analysis
  16. Complexity Analysis
  17. Visualization
  18. Conclusion
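
For reference on item 7: the piecewise index function in the paper this lecture covers (Wu et al., "Rethinking and Improving Relative Position Encoding for Vision Transformer", ICCV 2021) maps a relative distance to a bucket index, keeping nearby offsets exact while compressing distant ones logarithmically. Below is a minimal Python sketch assuming the paper's formulation; the threshold values `alpha`, `beta`, and `gamma` are illustrative defaults, not necessarily those used in the lecture.

```python
import math

def piecewise_index(x: int, alpha: int = 8, beta: int = 16, gamma: int = 32) -> int:
    """Map a relative distance x to a bucket index.

    Close offsets (|x| <= alpha) keep their exact index; farther offsets
    are compressed logarithmically and clipped to +/- beta, following the
    piecewise function described in the iRPE paper. The default values of
    alpha, beta, and gamma are illustrative assumptions.
    """
    if abs(x) <= alpha:
        return x
    sign = 1 if x > 0 else -1
    # Logarithmically compress the tail, then round to an integer bucket.
    idx = alpha + math.log(abs(x) / alpha) / math.log(gamma / alpha) * (beta - alpha)
    return sign * min(beta, round(idx))
```

For the 2D case (item 8), one of the schemes the lecture compares applies this function to the horizontal and vertical offsets between two patches separately and combines the resulting pair of indices into a single bucket.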
