
Decision Transformer - Reinforcement Learning via Sequence Modeling

Yannic Kilcher via YouTube

Overview

Explore a comprehensive video explanation of the research paper "Decision Transformer: Reinforcement Learning via Sequence Modeling." Delve into the innovative approach of framing offline reinforcement learning as a sequence modeling problem, leveraging the power of Transformer architectures. Learn about the Decision Transformer model, which generates actions by conditioning on a desired return (reward-to-go), past states, and past actions, rather than by learning a value function or policy gradient. Discover how this method compares to traditional value-based and policy-gradient approaches in reinforcement learning. Examine key concepts such as offline reinforcement learning, temporal difference learning, reward-to-go, and the context length problem. Analyze experimental results on various benchmarks and gain insights into the potential implications of this research for the field of reinforcement learning.
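To make the return-conditioning idea concrete, here is a minimal sketch (not the paper's code; function names are illustrative) of the two preprocessing steps the video covers: computing the reward-to-go for each timestep of a trajectory, and interleaving returns, states, and actions into the flat token sequence a Transformer is trained on.

```python
import numpy as np

def returns_to_go(rewards, gamma=1.0):
    """Reward-to-go at step t: sum of rewards from t to the end of the
    trajectory (optionally discounted). The Decision Transformer paper
    uses undiscounted returns, i.e. gamma = 1."""
    rtg = np.zeros(len(rewards))
    running = 0.0
    # Walk backwards so each step accumulates all future rewards.
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        rtg[t] = running
    return rtg

def interleave_trajectory(rtg, states, actions):
    """Build the flat input sequence (R_1, s_1, a_1, R_2, s_2, a_2, ...)
    that the model autoregressively predicts actions from."""
    seq = []
    for r, s, a in zip(rtg, states, actions):
        seq.extend([("R", r), ("s", s), ("a", a)])
    return seq

# Example: rewards [1, 0, 2] give returns-to-go [3, 2, 2].
print(returns_to_go([1.0, 0.0, 2.0]))
```

At test time, instead of computing the return from data, you seed the sequence with the return you *want* (e.g. the best return seen in the dataset) and let the model generate actions consistent with it.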

Syllabus

- Intro & Overview
- Offline Reinforcement Learning
- Transformers in RL
- Value Functions and Temporal Difference Learning
- Sequence Modeling and Reward-to-go
- Why this is ideal for offline RL
- The context length problem
- Toy example: Shortest path from random walks
- Discount factors
- Experimental Results
- Do you need to know the best possible reward?
- Key-to-door toy experiment
- Comments & Conclusion

Taught by

Yannic Kilcher
