SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning

Steve Brunton via YouTube

Overview

Explore the innovative SINDy-RL framework in this 21-minute video lecture by Steve Brunton. Delve into the world of interpretable and efficient model-based reinforcement learning, combining sparse identification of nonlinear dynamics (SINDy) with deep reinforcement learning (DRL). Learn how this approach creates efficient, interpretable, and trustworthy representations of dynamics models, reward functions, and control policies. Discover the advantages of SINDy-RL over traditional DRL methods, including reduced data requirements and smaller, more interpretable control policies. Follow along as the lecture covers reinforcement learning basics, its drawbacks, dictionary learning, and the various components of SINDy-RL, including environment modeling, reward function approximation, agent design, and uncertainty quantification. Gain insights into how this method can be applied to benchmark control environments and challenging fluids problems, potentially revolutionizing control strategies in complex systems like tokamak fusion reactors and fluid dynamics.
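The core idea behind SINDy, which the lecture builds on, is to fit a sparse linear combination of candidate dictionary terms to observed dynamics. A minimal sketch of that idea (not the authors' code) using sequentially thresholded least squares on a synthetic 1-D system, with the system dx/dt = -2x + 0.5x³ chosen here purely as an illustrative example:

```python
import numpy as np

# Synthetic data from an assumed example system: dx/dt = -2*x + 0.5*x**3
x = np.linspace(-2, 2, 200)
dxdt = -2 * x + 0.5 * x**3

# Dictionary of candidate terms Theta(X) = [1, x, x^2, x^3]
Theta = np.column_stack([np.ones_like(x), x, x**2, x**3])

def stlsq(Theta, dxdt, threshold=0.1, iters=10):
    """Sequentially thresholded least squares: the sparse solver at
    the heart of SINDy. Repeatedly fit, zero out small coefficients,
    and refit on the surviving dictionary terms."""
    xi = np.linalg.lstsq(Theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        if (~small).any():
            xi[~small] = np.linalg.lstsq(Theta[:, ~small], dxdt, rcond=None)[0]
    return xi

xi = stlsq(Theta, dxdt)
print(xi)  # recovers the sparse model: coefficients ~[0, -2, 0, 0.5]
```

The resulting nonzero coefficients read off directly as an interpretable equation, which is what lets SINDy-RL replace black-box neural dynamics models with compact symbolic ones.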

Syllabus

Intro
What is Reinforcement Learning?
Reinforcement Learning Drawbacks
Dictionary Learning and SINDy
SINDy-RL: Environment
SINDy-RL: Reward
SINDy-RL: Agent
SINDy-RL: Uncertainty Quantification
Recap and Outro

Taught by

Steve Brunton
