Overview
Explore a comprehensive analysis of a research paper examining whether Wikipedia pre-training can enhance offline reinforcement learning. Delve into the approach of treating reinforcement learning as sequence modeling, initializing the model from a pre-trained language model to improve performance on control and game tasks. Discover how this method speeds up training convergence by 3-6x and achieves state-of-the-art results across various environments. Gain insights into the experimental findings, attention-pattern analysis, and scaling properties of the technique. Understand the implications for bridging language modeling and reinforcement learning, opening new avenues for knowledge transfer between seemingly disparate domains.
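For orientation, here is a minimal sketch (not the authors' code) of what "offline RL as sequence modeling on a pre-trained language model" can look like: a Decision-Transformer-style model that interleaves return-to-go, state, and action tokens and feeds them through a language-pretrained GPT-2 backbone. All class names, dimensions, and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch: offline RL as sequence modeling on a pre-trained LM.
# Assumes Decision-Transformer-style (return-to-go, state, action) tokens;
# names and dimensions are illustrative, not the paper's implementation.
import torch
import torch.nn as nn
from transformers import GPT2Model

class PretrainedDecisionTransformer(nn.Module):
    def __init__(self, state_dim: int, act_dim: int, hidden: int = 768):
        super().__init__()
        # Language-pretrained transformer reused as the trajectory backbone.
        self.backbone = GPT2Model.from_pretrained("gpt2")  # hidden size 768
        # Per-modality linear embeddings project into the LM's token space.
        self.embed_rtg = nn.Linear(1, hidden)
        self.embed_state = nn.Linear(state_dim, hidden)
        self.embed_action = nn.Linear(act_dim, hidden)
        self.predict_action = nn.Linear(hidden, act_dim)

    def forward(self, rtg, states, actions):
        # rtg: (B, T, 1), states: (B, T, state_dim), actions: (B, T, act_dim)
        B, T = states.shape[:2]
        # Interleave tokens as (R_1, s_1, a_1, R_2, s_2, a_2, ...).
        tokens = torch.stack(
            (self.embed_rtg(rtg),
             self.embed_state(states),
             self.embed_action(actions)),
            dim=2,
        ).reshape(B, 3 * T, -1)
        h = self.backbone(inputs_embeds=tokens).last_hidden_state
        # Read out each action prediction from its state-token position.
        return self.predict_action(h[:, 1::3])
```

In a setup like this, training would regress predicted actions onto the logged actions in an offline trajectory dataset, with the pre-trained weights providing the head start in convergence discussed in the video.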
Syllabus
- Intro
- Paper Overview
- Offline Reinforcement Learning as Sequence Modelling
- Input Embedding Alignment & other additions
- Main experimental results
- Analysis of the attention patterns across models
- More experimental results: scaling properties, ablations, etc.
- Final thoughts
Taught by
Yannic Kilcher