

Stochastic Bandits: Foundations and Current Perspectives

Simons Institute via YouTube

Overview

Explore the foundations and current perspectives of stochastic bandits in this comprehensive lecture by Shipra Agrawal of Columbia University. Delve into the fundamental model of sequential learning, in which the rewards from different actions are assumed to be drawn independently and identically from fixed distributions. Gain insights into the main algorithms for stochastic bandits, including Upper Confidence Bound (UCB) and Thompson Sampling, and discover how these algorithms can be adapted to incorporate various additional constraints. This talk, part of the Data-Driven Decision Processes Boot Camp at the Simons Institute, provides a thorough examination of this central topic in sequential learning and decision making.
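For readers unfamiliar with the two algorithms named in the overview, below is a minimal, self-contained Python sketch of UCB1 and Beta-Bernoulli Thompson Sampling on a toy problem with Bernoulli reward arms. This is not code from the lecture: the function names, the uniform Beta(1, 1) priors, and the horizon and seed parameters are illustrative assumptions.

```python
# Illustrative sketch only (not from the lecture): UCB1 and Thompson Sampling
# for a stochastic bandit whose arms pay Bernoulli rewards with fixed means.
import math
import random


def ucb1(arm_means, horizon=10_000, seed=0):
    """UCB1: play the arm maximizing empirical mean + exploration bonus."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k          # number of pulls per arm
    sums = [0.0] * k          # total reward per arm
    total_reward = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1       # pull each arm once to initialize estimates
        else:
            # index_i = empirical mean_i + sqrt(2 ln t / n_i)
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward


def thompson_sampling(arm_means, horizon=10_000, seed=0):
    """Thompson Sampling: sample each arm's Beta posterior, play the argmax."""
    rng = random.Random(seed)
    k = len(arm_means)
    successes = [1] * k       # Beta(1, 1) uniform priors (assumed)
    failures = [1] * k
    total_reward = 0.0
    for _ in range(horizon):
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
        total_reward += reward
    return total_reward


if __name__ == "__main__":
    means = [0.3, 0.5, 0.7]   # hypothetical fixed reward distributions
    print("UCB1 total reward:", ucb1(means))
    print("Thompson total reward:", thompson_sampling(means))
```

Both routines follow the same protocol: at each round pick an arm, observe a reward drawn from that arm's fixed distribution, and update only that arm's statistics. They differ only in how the next arm is chosen, an optimistic confidence-bound index versus a draw from the posterior.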

Syllabus

Stochastic Bandits: Foundations and Current Perspectives

Taught by

Simons Institute
