Large Scale Machine Learning and Convex Optimization - Lecture 3

Hausdorff Center for Mathematics, via YouTube

Classroom Contents

  1. Intro
  2. Main motivating examples
  3. Subgradient method/descent (Shor et al., 1985)
  4. Subgradient descent for machine learning: assumptions (expected risk vs. empirical risk)
  5. Summary: minimizing convex functions
  6. Relationship to online learning
  7. Stochastic subgradient "descent"/method
  8. Convex stochastic approximation: existing work, with known global minimax rates of convergence for non-smooth problems (Nemirovsky and Yudin, 1983; Agarwal et al., 2012)
  9. Robustness to wrong constants for γ_n = C/n
  10. Robustness to lack of strong convexity
  11. Beyond the stochastic gradient method
  12. Outline
  13. Adaptive algorithm for logistic regression
  14. Self-concordance
  15. Least-mean-square algorithm
  16. Markov chain interpretation of constant step sizes
  17. Least-squares: proof technique
  18. Simulations: synthetic examples
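The stochastic subgradient method named in the outline above can be sketched as follows. This is an illustrative example, not the lecture's own code: the objective (estimating a median by minimizing the non-smooth risk E|x − z|), the step-size constant C, and the C/√n schedule are assumptions chosen to keep the sketch self-contained.

```python
import random


def stochastic_subgradient_median(samples, C=1.0):
    """Minimize f(x) = E|x - z| over a stream of samples z via the
    stochastic subgradient method x_{n+1} = x_n - gamma_n * g_n, where
    g_n = sign(x_n - z_n) is a subgradient of |x - z_n| at x_n and
    gamma_n = C / sqrt(n) is a standard step size for non-smooth problems.
    Returns the averaged iterate (Polyak-Ruppert averaging), which is the
    estimate of the median of the distribution of z."""
    x = 0.0     # current iterate
    avg = 0.0   # running average of iterates
    for n, z in enumerate(samples, start=1):
        # Subgradient of |x - z| at x (any value in [-1, 1] is valid at x == z).
        g = 1.0 if x > z else (-1.0 if x < z else 0.0)
        gamma = C / n ** 0.5
        x -= gamma * g
        avg += (x - avg) / n  # incremental mean of the iterates
    return avg


# Usage: the minimizer of E|x - z| is the median, here 3.0.
random.seed(0)
data = [random.gauss(3.0, 1.0) for _ in range(20000)]
est = stochastic_subgradient_median(data)
```

Averaging the iterates rather than returning the last one damps the oscillation that constant-order steps induce around the minimizer, which is the point made in the later least-mean-square and constant-step-size chapters.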
