Overview
Explore the concept of Adversarial Bandits with Knapsacks in this 21-minute IEEE conference talk. Delve into dynamic pricing as motivation, prior work on Stochastic BwK, and the relevant feedback models. Examine the main result, the key challenges, and the reasons why BwK, and especially Adversarial BwK, is hard. Learn about the linear relaxation, the Lagrange game, and the main algorithm (MAIN). Discover the regret bound, a simpler algorithm, and the differences between the high-probability and adaptive-adversary settings. Conclude with extensions and potential future work in this field.
Syllabus
Intro
(Motivation) Dynamic Pricing
Bandits w/ Knapsacks (BwK)
Prior Work - Stochastic BwK
Background: Feedback Models
Main Result
Why is BwK hard?
Why is Adversarial BwK harder?
Benchmark
Overview
Linear Relaxation
Lagrange Game
Step 3a: Main Algorithm (MAIN)
Step 3b: Learning in Games
Regret Bound
Challenges
Simple Algorithm
High-Probability vs. Adaptive Adversary
Extensions
Future Work
Taught by
IEEE FOCS: Foundations of Computer Science