Overview
Explore challenging problems in robotics through this lecture from the Theory of Reinforcement Learning Boot Camp. Delve into topics such as backflips, models, experiments, perception, and Q functions. Examine deep models, data vs. physics approaches, state space, and use cases for online optimization and state estimation. Investigate physics models, Lagrangian state parameters, gradient-based policy search, and static output feedback. Learn about over-parameterization, Euler parameters, and lessons from roboticists. Analyze the 2D Hopper, rare event simulation, traditional approaches, and failure cases. Discover insights on hopping robots, manipulation, planar grippers, robot simulators, occupation measures, convergence, language and state, and linear models.
Syllabus
Introduction
Backflips
Models
Experiments
Perception
Types of Models
Q Functions
Deep Models
Data vs Physics
State Space
Use Cases
Online Optimization
State Estimation
Physics Models
Lagrangian State Parameters
Gradient-based Policy Search
L4DC
Static Output Feedback
Over-Parameterization
Euler Parameters
Lessons from Roboticists
The 2D Hopper
Rare Event Simulation
Traditional Approach
Failure Cases
Hopping Robot
Manipulation Notes
Planar Gripper
Robot Simulators
How to Run a Robot
Occupation Measures
Convergence
Language and State
Linear Models
Taught by
Simons Institute