
On Gradient-Based Optimization: Accelerated, Stochastic and Nonconvex

Paul G. Allen School via YouTube

Overview

Explore cutting-edge developments in gradient-based optimization for large-scale statistical data analysis in this lecture by Michael I. Jordan, Distinguished Professor at UC Berkeley. Delve into three key areas: a novel framework for understanding Nesterov acceleration from continuous-time and Lagrangian perspectives, efficient methods for escaping saddle points in nonconvex optimization, and the acceleration of Langevin diffusion. Gain insight into Jordan's interdisciplinary approach, which bridges the computational, statistical, cognitive, and biological sciences. Learn from a renowned researcher whose accolades include membership in the National Academy of Sciences and the ACM/AAAI Allen Newell Award.
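
To make the continuous-time perspective concrete, here is a brief illustrative sketch (background context, not a formulation taken from the lecture itself): in the limit of vanishing step size $s$, Nesterov's accelerated gradient iteration

\[
y_k = x_k + \frac{k-1}{k+2}\,(x_k - x_{k-1}), \qquad x_{k+1} = y_k - s\,\nabla f(y_k)
\]

tracks the second-order ordinary differential equation

\[
\ddot{X}(t) + \frac{3}{t}\,\dot{X}(t) + \nabla f\big(X(t)\big) = 0,
\]

whose solutions satisfy $f(X(t)) - f(x^\ast) = O(1/t^2)$ for convex $f$. The Lagrangian framework mentioned above generalizes this kind of ODE to a family of accelerated dynamics; the exact formulation presented in the lecture may differ.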

Syllabus

Taskar Memorial Lecture 2018: M. Jordan (UC Berkeley)

Taught by

Paul G. Allen School

