Class Central Classrooms beta
YouTube videos curated by Class Central.
Classroom Contents
Online Learning and Bandits - Part 1
- 1 Intro
- 2 Positioning this Tutorial
- 3 Working Definitions
- 4 Full Information Online Learning
- 5 Setup
- 6 OCO Problem
- 7 Design Principle
- 8 Online Gradient Descent (OGD) Algorithm
- 9 Online Gradient Descent Result
- 10 Proof of OGD regret bound (ctd)
- 11 OGD Discussion
- 12 From Learning Parameters to Picking Actions
- 13 Let's apply what we know
- 14 Exponential Weights / Hedge Algorithm: Exponential Weights (EW)
- 15 EW Analysis: applying Hoeffding's Lemma to the loss of each round
- 16 Summary so far: balancing "model complexity" vs "overfitting"
- 17 FTRL/MD "sneak peek"
- 18 FTRL/MD "sneak peek" performance: Follow the Regularised Leader (FTRL) algorithm
- 19 Quadratic Losses
- 20 Curvature assumptions
- 21 ONS Algorithm
- 22 ONS Performance
- 23 ONS Discussion
- 24 Offline Optimisation
- 25 Online to Batch Assumption: stochastic setting
- 26 Computing Saddle Points
- 27 Application 3: Saddle Point Algorithm (approximate saddle point solver)
- 28 Application 3: Saddle Point Analysis
- 29 Conclusion
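
The chapter list above only names the algorithms covered in the video. As a concrete reference point, here is a minimal Python sketch of the Online Gradient Descent update from items 8-11; it is not taken from the tutorial, and the feasible set (an L2 ball), the fixed step size, and the example quadratic losses are illustrative assumptions.

```python
# Minimal, illustrative sketch of Online Gradient Descent (OGD) for online
# convex optimisation. The L2-ball feasible set, the constant step size, and
# the quadratic example losses are assumptions made for this sketch.
import numpy as np

def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto the L2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def ogd(grad_fns, dim, eta=0.1, radius=1.0):
    """Play x_t, observe the gradient of the round-t loss, take a projected step."""
    x = np.zeros(dim)
    iterates = []
    for grad in grad_fns:              # one gradient oracle per round
        iterates.append(x.copy())      # commit to x_t before seeing the loss
        x = project_l2_ball(x - eta * grad(x), radius)
    return iterates

# Example: quadratic losses f_t(x) = ||x - z_t||^2 with drifting targets z_t.
rng = np.random.default_rng(0)
targets = [rng.normal(size=2) * 0.5 for _ in range(100)]
grads = [lambda x, z=z: 2.0 * (x - z) for z in targets]
xs = ogd(grads, dim=2, eta=0.1)
print("final iterate:", xs[-1])
```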