

Hyperparameter Optimization for Reinforcement Learning Using Meta's Ax

Digi-Key via YouTube

Overview

Explore hyperparameter optimization for reinforcement learning using Meta's Ax framework in this comprehensive 58-minute tutorial. Learn about the three basic HPO techniques: grid search, random search, and Bayesian optimization. Dive into a practical implementation in Python, including package installation, environment setup, and configuration of Weights & Biases. Follow along as the instructor demonstrates loading and testing the Pendulum environment from Gymnasium, defining a trial that trains and tests an agent, and setting up an Ax experiment for Bayesian optimization. Gain insights into debugging, training an agent with the best hyperparameters found, and running additional trials. Conclude with an introduction to Weights & Biases sweeps as a further optimization technique.
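
For orientation before watching, the sketch below shows the general shape of an Ax-driven Bayesian optimization loop like the one built in the tutorial. It is not the instructor's code: the parameter names, bounds, and the random-rollout evaluation function are illustrative placeholders for the agent training and testing done in the video.

```python
# A minimal sketch, assuming Ax's Service API (AxClient) and Gymnasium's Pendulum-v1.
# Parameter names, bounds, and the evaluation function are illustrative placeholders.
import gymnasium as gym
from ax.service.ax_client import AxClient
from ax.service.utils.instantiation import ObjectiveProperties


def evaluate(parameters: dict) -> float:
    """Stand-in for a trial: in the video this trains and tests an agent with the
    suggested hyperparameters; here it just averages reward over random rollouts."""
    env = gym.make("Pendulum-v1")
    total_reward, episodes = 0.0, 3
    for _ in range(episodes):
        env.reset()
        done = False
        while not done:
            _, reward, terminated, truncated, _ = env.step(env.action_space.sample())
            total_reward += float(reward)
            done = terminated or truncated
    env.close()
    return total_reward / episodes


ax_client = AxClient()
ax_client.create_experiment(
    name="pendulum_hpo",
    parameters=[
        {"name": "learning_rate", "type": "range", "bounds": [1e-5, 1e-2], "log_scale": True},
        {"name": "gamma", "type": "range", "bounds": [0.9, 0.999]},
    ],
    objectives={"mean_reward": ObjectiveProperties(minimize=False)},
)

# Ax alternates between proposing hyperparameters and receiving results; early trials
# are quasi-random (Sobol), later ones are chosen by a Gaussian-process surrogate model.
for _ in range(10):
    params, trial_index = ax_client.get_next_trial()
    ax_client.complete_trial(trial_index=trial_index, raw_data=evaluate(params))

best_parameters, _ = ax_client.get_best_parameters()
print(best_parameters)
```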

Syllabus

- Introduction
- What are hyperparameters
- Hyperparameter optimization loop
- Grid search
- Random search
- Bayesian optimization
- Install Python packages
- Import Python packages
- Configure Weights & Biases
- Set deterministic mode
- Load pendulum gymnasium environment
- Test pendulum environment
- Test random actions with dummy agent
- Testing and logging callbacks
- Define trial to train and test an agent
- Define project settings and hyperparameter ranges
- Create gymnasium environment
- Define Ax experiment to perform Bayesian optimization for hyperparameters
- Perform hyperparameter optimization and debugging
- Train agent with best hyperparameters
- Test agent
- Run additional trials
- Weights & Biases sweeps
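
As a rough companion to that final syllabus item, the sketch below shows what a Weights & Biases sweep can look like. The project name, metric, and hyperparameter ranges are assumptions chosen for illustration, not the configuration used in the video.

```python
# A hedged sketch of a Weights & Biases sweep. The sweep samples hyperparameters and
# repeatedly calls the training function; the ranges and metric name are assumptions.
import wandb


def train():
    run = wandb.init()
    lr = run.config.learning_rate   # values are sampled by the sweep controller
    gamma = run.config.gamma
    # ... train and evaluate the agent here using lr and gamma ...
    mean_reward = -1000.0           # placeholder for the measured episode reward
    run.log({"mean_reward": mean_reward})
    run.finish()


sweep_config = {
    "method": "bayes",  # W&B sweeps also support "grid" and "random"
    "metric": {"name": "mean_reward", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"distribution": "log_uniform_values", "min": 1e-5, "max": 1e-2},
        "gamma": {"min": 0.9, "max": 0.999},
    },
}

sweep_id = wandb.sweep(sweep_config, project="pendulum-hpo")
wandb.agent(sweep_id, function=train, count=10)
```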

Taught by

Digi-Key
