Overview
Explore a groundbreaking approach to reinforcement learning in this 18-minute video analysis of the research paper "Curiosity-driven Exploration by Self-supervised Prediction." Dive into the method that formulates curiosity as an intrinsic reward signal, enabling agents to explore environments where extrinsic rewards are sparse or absent. Discover how this technique scales to high-dimensional continuous state spaces, sidesteps the difficulty of predicting raw pixels by making predictions in a learned feature space, and focuses on the aspects of the environment that the agent's actions can affect. Examine the evaluation of this approach in VizDoom and Super Mario Bros across three key scenarios: sparse extrinsic reward, exploration with no extrinsic reward, and generalization to unseen scenarios. Gain insights into how curiosity-driven exploration improves learning efficiency, reduces the number of environment interactions required, and helps agents adapt to new challenges in reinforcement learning tasks.
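The core idea discussed in the video is that an inverse model shapes a feature space capturing what the agent can influence, and the forward model's prediction error in that space serves as the curiosity reward. Below is a minimal, hypothetical PyTorch sketch of that idea; the module names (CuriosityModule), layer sizes, and feat_dim are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of curiosity as an intrinsic reward, assuming a feature encoder,
# a forward model predicting next-state features from (features, action),
# and an inverse model predicting the action from consecutive features.
# All sizes and names here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CuriosityModule(nn.Module):
    def __init__(self, obs_dim, n_actions, feat_dim=64):
        super().__init__()
        self.n_actions = n_actions
        # Encoder: maps raw observations to a compact feature space, so
        # curiosity does not depend on predicting every pixel.
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, feat_dim))
        # Inverse model: predicts the action taken between two states; its
        # loss pushes the features toward agent-relevant information.
        self.inverse = nn.Sequential(nn.Linear(2 * feat_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_actions))
        # Forward model: predicts next-state features from current features
        # and the action; its error is the curiosity signal.
        self.forward_model = nn.Sequential(nn.Linear(feat_dim + n_actions, 128),
                                           nn.ReLU(),
                                           nn.Linear(128, feat_dim))

    def forward(self, obs, next_obs, action):
        phi = self.encoder(obs)
        phi_next = self.encoder(next_obs)
        a_onehot = F.one_hot(action, num_classes=self.n_actions).float()
        # Forward-model prediction error in feature space = intrinsic reward.
        phi_next_pred = self.forward_model(torch.cat([phi, a_onehot], dim=-1))
        intrinsic_reward = 0.5 * (phi_next_pred - phi_next).pow(2).sum(dim=-1)
        # Training losses: forward prediction error plus inverse-model loss.
        fwd_loss = intrinsic_reward.mean()
        action_logits = self.inverse(torch.cat([phi, phi_next], dim=-1))
        inv_loss = F.cross_entropy(action_logits, action)
        return intrinsic_reward.detach(), fwd_loss + inv_loss
```

In a training loop, the detached intrinsic reward would typically be scaled and added to any extrinsic reward before the policy update, while the returned loss trains the curiosity module itself.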
Syllabus
Curiosity-driven Exploration by Self-supervised Prediction
Taught by
Yannic Kilcher