Overview
Explore an insightful 30-minute AutoML seminar that delves into automating hyperparameter optimization in gradient-based machine learning algorithms. Learn how hypergradients can be computed automatically through a straightforward modification to backpropagation, reducing the need for manual hyperparameter tuning. Discover how the method can be applied recursively to optimize hyper-hyperparameters to arbitrary depth, yielding optimization processes that grow increasingly robust and less sensitive to the initial choice of hyperparameters. Examine practical implementations and experimental results on Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs), and Recurrent Neural Networks (RNNs), complete with a PyTorch implementation demonstration; a minimal sketch of the core idea appears below. Master the concepts behind this elegant approach to automatically tuning step sizes, momentum coefficients, and other crucial hyperparameters of machine learning models.
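To make the idea concrete, here is a minimal, hand-rolled sketch of hypergradient descent on a step size, in the spirit of the talk but not the authors' released implementation. It fits a toy least-squares problem (the data, variable names `w`, `alpha`, `kappa`, and the fixed hyper-step size are all illustrative assumptions): one SGD update is written as a differentiable function of the step size `alpha`, and the loss after that update is backpropagated into `alpha`.

```python
import torch

# Hedged sketch, not the authors' code: learn w to fit y = 2x while also
# learning the step size alpha by differentiating through the update rule.

torch.manual_seed(0)
x = torch.randn(64, 1)
y = 2.0 * x

w = torch.zeros(1, requires_grad=True)           # model parameter
alpha = torch.tensor(0.01, requires_grad=True)   # hyperparameter: step size
kappa = 1e-4                                     # hyper-step size (fixed here)

def loss_fn(w):
    return ((x * w - y) ** 2).mean()

for step in range(200):
    # Ordinary gradient of the loss w.r.t. w (earlier history detached).
    g, = torch.autograd.grad(loss_fn(w), w)

    # One SGD step expressed as a differentiable function of alpha.
    w_next = w - alpha * g

    # Hypergradient: d(loss after the step) / d(alpha).
    h, = torch.autograd.grad(loss_fn(w_next), alpha)

    with torch.no_grad():
        alpha -= kappa * h    # update the hyperparameter first...
        w -= alpha * g        # ...then the parameter with the new alpha

print(float(loss_fn(w)), float(alpha))
```

The recursion described in the overview corresponds to making `kappa` itself a `requires_grad` tensor and repeating the same trick one level up; per the talk, each added level makes the final result less sensitive to the values chosen at the top of the stack.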
Syllabus
Kartik Chandra / Audrey Xie: "Gradient Descent: The Ultimate Optimizer"
Taught by
AutoML Seminars