
YouTube

Can Wikipedia Help Offline Reinforcement Learning? - Author Interview

Yannic Kilcher via YouTube

Overview

Explore an in-depth interview with authors Machel Reid and Yutaro Yamada about their research on leveraging pre-trained language models for offline reinforcement learning. Delve into the experimental results, challenges, and insights gained from applying Wikipedia-trained models to control and game environments. Learn about the potential for transferring knowledge between generative modeling tasks across different domains, the impact on convergence speed and performance, and the implications for future research in reinforcement learning and sequence modeling. Gain valuable perspectives on model architectures, attention patterns, computational requirements, and practical advice for getting started in this emerging field.

Syllabus

- Intro
- Brief paper, setup & idea recap
- Main experimental results & high standard deviations
- Why is there no clear winner?
- Why are bigger models not a lot better?
- What’s behind the name ChibiT?
- Why is iGPT underperforming?
- How are tokens distributed in Reinforcement Learning?
- What other domains could have good properties to transfer?
- A deeper dive into the models' attention patterns
- Codebase, model sizes, and compute requirements
- Scaling behavior of pre-trained models
- What did not work out in this project?
- How can people get started and where to go next?

Taught by

Yannic Kilcher

