A Smoothed Analysis of the Greedy Algorithm for Linear Contextual Bandits - Theory Seminar
Paul G. Allen School via YouTube
Overview
Explore a theory seminar on the smoothed analysis of the greedy algorithm in bandit learning. Delve into the tension between exploration and exploitation, particularly in high-stakes decision-making scenarios involving individuals. Examine how the greedy algorithm, which always makes the myopically optimal decision, can be analyzed in linear contextual bandit problems. Learn about the smoothed analysis approach, which shows that small random perturbations of adversarially chosen contexts suffice for the greedy algorithm to achieve "no regret" performance. Investigate the implications for balancing exploration and exploitation in slightly perturbed environments. Cover topics such as classic algorithm design, online algorithms, online ML algorithms, single- and multi-parameter models, regret analysis, diversity, and margins. Gain insights into when and why greedy algorithms can be a sound choice in practice.
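As background for the seminar, here is a minimal sketch (not taken from the talk) of the greedy algorithm in a multi-parameter linear contextual bandit with smoothed contexts: each arm has a hidden parameter vector, base contexts are perturbed by small Gaussian noise, and the learner always plays the arm whose estimated reward is highest, with no explicit exploration. All constants (dimension, arm count, noise scales) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, K, T = 5, 4, 2000       # context dimension, number of arms, rounds (illustrative)
sigma = 0.1                # smoothing scale: small Gaussian perturbation of contexts
theta = rng.normal(size=(K, d))                       # hidden per-arm parameters
theta /= np.linalg.norm(theta, axis=1, keepdims=True)

# Per-arm ridge-regularized least-squares state (identity prior keeps A invertible)
A = np.stack([np.eye(d) for _ in range(K)])  # Gram matrices
b = np.zeros((K, d))                         # response vectors

regret = 0.0
for t in range(T):
    base = rng.uniform(-1, 1, size=(K, d))       # (adversarial-ish) base contexts
    x = base + sigma * rng.normal(size=(K, d))   # smoothed perturbation

    # Greedy step: play the arm with the highest estimated reward, no bonus term
    theta_hat = np.stack([np.linalg.solve(A[i], b[i]) for i in range(K)])
    est = np.einsum('id,id->i', theta_hat, x)
    i = int(np.argmax(est))

    true_rewards = np.einsum('id,id->i', theta, x)
    r = true_rewards[i] + 0.05 * rng.normal()    # noisy observed reward
    regret += true_rewards.max() - true_rewards[i]

    # Update only the played arm's least-squares statistics
    A[i] += np.outer(x[i], x[i])
    b[i] += r * x[i]

avg_regret = regret / T
print(avg_regret)
```

The point of the smoothing term is that perturbed contexts provide "free" diversity, so the played arm's design matrix grows in all directions and greedy exploitation alone can drive the average per-round regret down over time.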
Syllabus
Intro
Meta-question
Classic Algorithm Design
Online Algorithms
Online ML Algorithms
Outline
Single-parameter model
Multi-parameter model
Regret w.r.t. M
(good) performance of greedy algorithms?
Single-parameter regime
Multi-parameter regime
A change in perspective
Diversity
Margins
Why might we use greedy?
Taught by
Paul G. Allen School