
Improving Intrinsic Exploration with Language Abstractions - Machine Learning Paper Explained

Yannic Kilcher via YouTube

Overview

Explore the concept of using language abstractions to improve intrinsic exploration in reinforcement learning through this in-depth video explanation. Dive into the challenges of sparse reward environments and how language descriptions of encountered states can be used to assess novelty. Learn about the MiniGrid and MiniHack environments, and understand how states are annotated with language. Examine baseline algorithms like AMIGo and NovelD, and discover how language is integrated into these methods. Analyze experimental results and consider the implications of using language-based variants for intrinsic exploration in challenging tasks. Gain insights into the potential of natural language as a medium for highlighting relevant abstractions in reinforcement learning environments.
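
The core idea discussed in the video, rewarding states whose language descriptions are novel, can be sketched with a toy count-based bonus. This is an illustrative simplification under assumed names: the paper's L-NovelD variant uses a learned novelty measure, not raw visitation counts.

```python
from collections import Counter

class LanguageNoveltyBonus:
    """Toy intrinsic reward over language descriptions of states.

    A description like "you see a key" earns a large bonus the first
    time it appears and a shrinking bonus on repeat visits. The class
    name and the 1/sqrt(count) schedule are illustrative assumptions,
    not the paper's exact formulation.
    """

    def __init__(self, scale: float = 1.0):
        self.counts = Counter()  # times each description has been seen
        self.scale = scale

    def bonus(self, description: str) -> float:
        # Count this visit, then return a reward that decays with count.
        self.counts[description] += 1
        return self.scale / self.counts[description] ** 0.5
```

For example, the first call with "you see a key" returns 1.0, the second returns about 0.707, so the agent is pushed toward states with as-yet-undescribed properties.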

Syllabus

- Intro
- Paper Overview: Language for exploration
- The MiniGrid & MiniHack environments
- Annotating states with language
- Baseline algorithm: AMIGo
- Adding language to AMIGo
- Baseline algorithm: NovelD and Random Network Distillation
- Adding language to NovelD
- Aren't we just using extra data?
- Investigating the experimental results
- Final comments

Taught by

Yannic Kilcher

