Safe Bayesian optimization
Classroom Contents
Safe and Efficient Exploration in Reinforcement Learning
- 1 Intro
- 2 RL beyond simulated environments?
- 3 Tuning the Swiss Free Electron Laser [with Kirschner, Mutný, Hiller, Ischebeck et al.]
- 4 Challenge: Safety Constraints
- 5 Safe optimization
- 6 Safe Bayesian optimization
- 7 Illustration of Gaussian Process Inference [cf. Rasmussen & Williams 2006] (see the sketch after this list)
- 8 Plausible maximizers
- 9 Certifying Safety
- 10 Confidence intervals for GPs?
- 11 Online tuning of 24 parameters
- 12 Shortcomings of SafeOpt
- 13 Safe learning for dynamical systems [Koller, Berkenkamp, Turchetta, Krause; CDC '18, '19]
- 14 Stylized task
- 15 Planning with confidence bounds [Koller, Berkenkamp, Turchetta, Krause; CDC '18, '19]
- 16 Forward-propagating uncertain, nonlinear dynamics
- 17 Challenges with long-term action dependencies
- 18 Safe learning-based MPC
- 19 Experimental illustration
- 20 Scaling up: Efficient Optimistic Exploration in Deep Model-Based Reinforcement Learning
- 21 Optimism in Model-based Deep RL
- 22 Deep Model-Based RL with Confidence: H-UCRL [Curi, Berkenkamp, Krause; NeurIPS 2020]
- 23 Illustration on Inverted Pendulum
- 24 Deep RL: MuJoCo Half-Cheetah
- 25 Action penalty effect
- 26 What about safety?
- 27 Safety-Gym Benchmark Suite
- 28 Which priors to choose? → PAC-Bayesian Meta-Learning [Rothfuss, Fortuin, Josifoski, Krause; ICML 2021]
- 29 Experiments - Predictive accuracy (Regression)
- 30 Meta-Learned Priors for Bayesian Optimization
- 31 Meta-Learned Priors for Sequential Decision Making
- 32 Safe and efficient exploration in real-world RL
- 33 Acknowledgments
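
Items 7-10 of the talk rest on exact Gaussian process inference and the confidence intervals it yields, which SafeOpt-style methods use to certify safety. The sketch below is a minimal, self-contained illustration of that machinery, following the standard posterior computation in Rasmussen & Williams (2006, Algorithm 2.1); the RBF kernel hyperparameters, the toy sine data, the confidence multiplier beta = 2, and the safety threshold h = -0.5 are illustrative assumptions, not values from the talk.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between two sets of 1-D inputs."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

def gp_posterior(x_train, y_train, x_test, noise_var=1e-2):
    """Exact GP posterior mean/variance (Rasmussen & Williams 2006, Alg. 2.1)."""
    K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    L = np.linalg.cholesky(K)                      # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = K_s.T @ alpha                             # posterior mean at x_test
    v = np.linalg.solve(L, K_s)
    var = rbf_kernel(x_test, x_test).diagonal() - np.sum(v**2, axis=0)
    return mu, np.maximum(var, 0.0)                # clip tiny negative values

# Toy data (illustrative, not from the talk).
x_obs = np.array([-2.0, -1.0, 0.5, 1.5])
y_obs = np.sin(x_obs)
x_grid = np.linspace(-3.0, 3.0, 200)

mu, var = gp_posterior(x_obs, y_obs, x_grid)
beta = 2.0                                         # confidence multiplier (assumed)
lower, upper = mu - beta * np.sqrt(var), mu + beta * np.sqrt(var)

# SafeOpt-style certification (item 9): a point is provably safe when even the
# pessimistic lower bound clears the safety threshold h (hypothetical value).
h = -0.5
safe_set = lower >= h
# Plausible maximizers (item 8): points whose upper bound beats the best lower bound.
plausible_max = upper >= lower.max()
```

The Cholesky-based solve is the standard numerically stable route to the posterior; the lower confidence bound is what certifies safety (item 9), while the upper bound singles out plausible maximizers (item 8) worth exploring next.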