
YouTube

Deep Reinforcement Learning in the Real World - Sergey Levine

Institute for Advanced Study via YouTube

Overview

Explore deep reinforcement learning applications in real-world scenarios through this lecture by Sergey Levine of UC Berkeley. Delve into the challenges and solutions of off-policy reinforcement learning with large datasets, covering both model-free and model-based approaches. Learn about QT-Opt, an off-policy Q-learning algorithm at scale, and its application to robotic grasping tasks. Discover how to address common issues in reinforcement learning, such as training on irrelevant data, and understand the potential of temporal difference models and Q-functions for learning implicit models. Gain insights into optimizing over valid states and applying model-based reinforcement learning to dexterous manipulation tasks.
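The core idea behind off-policy methods like the QT-Opt approach discussed in the lecture is that the Q-function can be trained from logged transitions regardless of which policy collected them. A minimal illustrative sketch of this principle (not Levine's actual QT-Opt, which uses deep networks and cross-entropy-method action optimization; the toy environment, dataset, and hyperparameters here are hypothetical):

```python
# Tabular off-policy Q-learning trained purely from a fixed dataset of
# logged transitions, mirroring the lecture's theme of RL from large
# offline datasets. All environment details below are made up.

N_STATES, N_ACTIONS = 5, 2
GAMMA, ALPHA = 0.9, 0.1

# Hypothetical logged transitions (state, action, reward, next_state),
# as if collected by some unknown behavior policy. Action 1 moves right;
# reward is given only for taking action 1 in the last state.
dataset = [
    (s, a, 1.0 if (s == N_STATES - 1 and a == 1) else 0.0,
     min(s + a, N_STATES - 1))
    for s in range(N_STATES) for a in range(N_ACTIONS)
] * 200  # replay the logged data many times

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

for s, a, r, s_next in dataset:
    # Off-policy TD target: bootstrap with a max over actions,
    # independent of whichever policy generated the data.
    target = r + GAMMA * max(Q[s_next])
    Q[s][a] += ALPHA * (target - Q[s][a])

# The greedy policy recovered from the learned Q-function.
greedy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(greedy)
```

QT-Opt scales this same principle up: the table becomes a neural network over images and gripper commands, and the `max` over actions is approximated with a stochastic optimizer, but the off-policy TD update is the same.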

Syllabus

Intro
Deep learning helps us handle unstructured environments
Reinforcement learning provides a formalism for behavior
RL has a big problem
Off-policy RL with large datasets
Off-policy model-free learning
How to solve for the Q-function?
QT-Opt: off-policy Q-learning at scale
Grasping with QT-Opt
Emergent grasping strategies
So what's the problem?
How to stop training on garbage?
How well does it work?
Off-policy model-based reinforcement learning
High-level algorithm outline
Model-based RL for dexterous manipulation
Q-Functions (can) learn models
Temporal difference models
Optimizing over valid states

Taught by

Institute for Advanced Study

