
Offline Reinforcement Learning and Model-Based Optimization

Simons Institute via YouTube

Overview

Explore offline reinforcement learning and model-based optimization in this 34-minute lecture by Sergey Levine of UC Berkeley. Delve into the power of predictive models and automated decision-making, focusing on data-driven reinforcement learning and data-driven model-based optimization. Learn about off-policy RL, the distribution shift problem, and learning with Q-function lower bounds. Examine the CQL algorithm and how it performs in practice. Investigate predictive modeling and design, including what goes wrong with naive prediction, and explore the model-based optimization problem. Discover the roles of uncertainty and extrapolation, and understand model inversion networks (MINs). Analyze experimental results and gain insights into these cutting-edge machine learning techniques.
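
The lecture's first half centers on learning with Q-function lower bounds via CQL. As a rough illustration only (not code from the lecture), the sketch below shows a conservative, CQL-style loss for discrete actions in PyTorch: a log-sum-exp penalty pushes Q-values down across all actions while pushing them up on the actions actually present in the offline dataset. All names, shapes, and hyperparameters here are assumptions for the sake of the example.

```python
# Illustrative sketch of a CQL-style conservative loss (assumed setup, discrete actions).
import torch
import torch.nn as nn


class QNetwork(nn.Module):
    """Simple MLP that maps an observation to one Q-value per action."""

    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)  # shape [batch, n_actions]


def cql_loss(q_net, target_q_net, batch, gamma: float = 0.99, alpha: float = 1.0):
    """Bellman error plus a conservative penalty that lower-bounds the learned Q-function."""
    obs, actions, rewards, next_obs, dones = batch  # actions: LongTensor [batch]

    q_values = q_net(obs)                                          # [batch, n_actions]
    q_taken = q_values.gather(1, actions.unsqueeze(1)).squeeze(1)  # Q of dataset actions

    with torch.no_grad():
        next_q = target_q_net(next_obs).max(dim=1).values
        target = rewards + gamma * (1.0 - dones) * next_q

    bellman_error = ((q_taken - target) ** 2).mean()

    # Conservative regularizer: penalize large Q-values on all (possibly
    # out-of-distribution) actions, offset by the Q-values of dataset actions.
    conservative_penalty = (torch.logsumexp(q_values, dim=1) - q_taken).mean()

    return bellman_error + alpha * conservative_penalty
```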

Syllabus

Intro
What makes modern machine learning work
Predictive models are very powerful!
Automated decision making is very powerful
First setting: data-driven reinforcement learning
Second setting: data-driven model-based optimization
Off-policy RL: a quick primer
What's the problem?
Distribution shift in a nutshell
How do prior methods address this?
Learning with Q-function lower bounds: algorithm
Does the bound hold in practice?
How does CQL compare?
Predictive modeling and design
What's wrong with just doing prediction?
The model-based optimization problem
Uncertainty and extrapolation
What can we do?
Model inversion networks (MINs)
Putting it all together
Experimental results
Some takeaways
Some concluding remarks

Taught by

Simons Institute
