Overview
Explore the challenges and advancements in collaborative learning with limited interaction in this 22-minute IEEE conference talk. Delve into the tight bounds for distributed exploration in multi-armed bandits presented by Chao Tao, Qin Zhang, and Yuan Zhou. Examine the problem statement, its variants, and the collaborative learning model, including communication steps and the tradeoff between rounds and speedup. Gain insight into the technical details of the non-adaptive setting, the hard instance description, and the pyramid-like distribution. Discover new ideas, input class considerations, and the adaptive case. Conclude with a summary of the paper's findings and their implications for machine learning and distributed systems.
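To make the round-limited collaborative model concrete, here is a minimal sketch, not the algorithm analyzed in the talk: several agents pull the surviving arms independently within a round, pool their empirical means at a communication step, and jointly eliminate the weaker half of the arms. The function name collaborative_best_arm, the halving rule, and all parameter choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np


def collaborative_best_arm(means, n_agents=10, n_rounds=3, pulls_per_round=2000, seed=0):
    """Illustrative round-limited collaborative exploration (a sketch, not the paper's algorithm).

    Within each round, the n_agents agents pull the surviving arms independently
    and in parallel; at the communication step they pool their empirical means
    and jointly eliminate the weaker half of the arms. Parallel pulls provide the
    speedup, while n_rounds caps the amount of interaction.
    """
    rng = np.random.default_rng(seed)
    arms = np.arange(len(means))       # indices of arms still in play
    estimates = np.zeros(len(means))
    for _ in range(n_rounds):
        pulls = max(1, pulls_per_round // len(arms))   # per-agent budget split over arms
        # Pooled Bernoulli samples from all agents for each surviving arm.
        estimates = np.array([
            rng.binomial(pulls * n_agents, means[a]) / (pulls * n_agents)
            for a in arms
        ])
        if len(arms) == 1:
            break
        keep = np.argsort(estimates)[::-1][: (len(arms) + 1) // 2]  # keep the better half
        arms, estimates = arms[keep], estimates[keep]
    return int(arms[int(np.argmax(estimates))])


if __name__ == "__main__":
    mu = [0.50, 0.55, 0.60, 0.45, 0.70, 0.65, 0.52, 0.58]
    print("best arm:", collaborative_best_arm(mu))   # expected: 4 (mean 0.70)
```

With three rounds and eight arms, the halving schedule narrows 8 -> 4 -> 2 -> 1 arms, illustrating how fewer communication rounds force each round to do more of the work.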
Syllabus
Introduction
Challenges in Machine Learning
Problem Statement
Problem Variants
Collaborative Learning Model
Communication Step
Speedup
Tradeoffs Between Rounds and Speedup
Results
Summary
Technical Details
Non-Adaptive Setting
Hard Instance Description
Pyramid-Like Distribution
Technical Challenges
New Ideas
Input Class
Adaptive Case
Other Results
Paper Summary
Taught by
IEEE FOCS: Foundations of Computer Science