Overview
Explore a comprehensive video explanation of the research paper "Decision Transformer: Reinforcement Learning via Sequence Modeling." Delve into the innovative approach of framing offline reinforcement learning as a sequence modeling problem, leveraging the power of Transformer architectures. Learn about the Decision Transformer model, which autoregressively generates actions by conditioning on a desired return (reward-to-go), past states, and past actions. Discover how this method compares to traditional value function and policy gradient approaches in reinforcement learning. Examine key concepts such as offline reinforcement learning, temporal difference learning, reward-to-go, and the context length problem. Analyze experimental results on various benchmarks and gain insights into the potential implications of this research for the field of reinforcement learning.
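To make the sequence-modeling framing concrete, the sketch below shows one common way to set it up: each trajectory is flattened into interleaved (return-to-go, state, action) tokens and a causal Transformer predicts the next action from the token at the current state. This is a minimal illustrative sketch, not the authors' implementation; the TinyDecisionTransformer class, its hyperparameters, and the returns_to_go helper are assumptions chosen for brevity.

```python
# Minimal sketch of the Decision Transformer idea (illustrative only,
# not the paper's reference code).
import torch
import torch.nn as nn

class TinyDecisionTransformer(nn.Module):
    """Toy causal Transformer over interleaved (return-to-go, state, action) tokens."""
    def __init__(self, state_dim, act_dim, embed_dim=64, n_layers=2, n_heads=2, max_len=20):
        super().__init__()
        self.embed_rtg = nn.Linear(1, embed_dim)          # return-to-go token
        self.embed_state = nn.Linear(state_dim, embed_dim)
        self.embed_action = nn.Linear(act_dim, embed_dim)
        self.embed_pos = nn.Embedding(max_len, embed_dim)  # one embedding per timestep
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        self.predict_action = nn.Linear(embed_dim, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T, _ = states.shape
        pos = self.embed_pos(torch.arange(T, device=states.device))  # (T, D)
        # Interleave tokens as R_1, s_1, a_1, R_2, s_2, a_2, ...
        tokens = torch.stack(
            [self.embed_rtg(rtg) + pos,
             self.embed_state(states) + pos,
             self.embed_action(actions) + pos],
            dim=2,
        ).reshape(B, 3 * T, -1)
        # Causal mask: each token may only attend to earlier tokens.
        mask = torch.triu(torch.full((3 * T, 3 * T), float("-inf")), diagonal=1)
        h = self.transformer(tokens, mask=mask)
        # Predict a_t from the hidden state at each s_t token (positions 1, 4, 7, ...).
        return self.predict_action(h[:, 1::3])

def returns_to_go(rewards):
    """Undiscounted reward-to-go: at each step, the sum of rewards from there onward."""
    return torch.flip(torch.cumsum(torch.flip(rewards, [1]), dim=1), [1])
```

At evaluation time, the desired target return would be fed in as the first return-to-go token and decremented by each observed reward, with generated actions and observed states appended autoregressively.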
Syllabus
- Intro & Overview
- Offline Reinforcement Learning
- Transformers in RL
- Value Functions and Temporal Difference Learning
- Sequence Modeling and Reward-to-go
- Why this is ideal for offline RL
- The context length problem
- Toy example: Shortest path from random walks
- Discount factors
- Experimental Results
- Do you need to know the best possible reward?
- Key-to-door toy experiment
- Comments & Conclusion
Taught by
Yannic Kilcher