

RWKV: Reinventing RNNs for the Transformer Era

Yannic Kilcher via YouTube

Overview

Explore an in-depth analysis of the Receptance Weighted Key Value (RWKV) model, an architecture that bridges the gap between Transformers and recurrent neural networks (RNNs). Delve into the evolution of linear attention mechanisms, RWKV's layer structure, and how the model combines the parallelizable training of a Transformer with the constant-memory, constant-time-per-token inference of an RNN. Examine experimental results, limitations, and visualizations showing that RWKV performs on par with similarly sized Transformers. Gain insight into an approach that reconciles computational efficiency with model performance in sequence-processing tasks.
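To make that training/inference trade-off concrete, below is a minimal sketch of the WKV recurrence from the RWKV paper in its sequential (RNN) form. This is an illustration, not the real implementation: the function name and array shapes are our own choices, `k` and `v` are assumed to be precomputed per-channel key/value projections, and the log-space rescaling that production code uses to keep the exponentials numerically stable is omitted.

```python
import numpy as np

def wkv_sequential(k, v, w, u):
    """Minimal sketch of RWKV's WKV operator in sequential (RNN) mode.

    k, v : (T, C) arrays of per-channel keys and values
    w    : (C,) positive per-channel decay rates
    u    : (C,) per-channel bonus applied to the current token

    Omits the numerical-stability tricks used in real implementations.
    """
    T, C = k.shape
    num = np.zeros(C)            # decayed, exp(k)-weighted sum of past values
    den = np.zeros(C)            # matching sum of weights (the normalizer)
    out = np.empty((T, C))
    for t in range(T):
        cur = np.exp(u + k[t])   # the current token gets the bonus u, no decay
        out[t] = (num + cur * v[t]) / (den + cur)
        # fold the current token into the running state, decaying older entries
        num = np.exp(-w) * num + np.exp(k[t]) * v[t]
        den = np.exp(-w) * den + np.exp(k[t])
    return out
```

During training, the same quantity can be computed for all positions at once (the time-parallel mode the video contrasts with this sequential mode), which is what lets RWKV train like a Transformer yet run inference with a fixed-size state per layer.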

Syllabus

- Introduction
- Fully Connected In-Person Conference in SF June 7th
- Transformers vs RNNs
- RWKV: Best of both worlds
- LSTMs
- Evolution of RWKV's Linear Attention
- RWKV's Layer Structure
- Time-Parallel vs Sequence Mode
- Experimental Results & Limitations
- Visualizations
- Conclusion

Taught by

Yannic Kilcher
