Explore adaptive sampling techniques through sequential decision-making in this 59-minute lecture by András György from the Alan Turing Institute. Delve into the theoretical foundations of learning, focusing on methods at the intersection of statistics, probability, and optimization. Discover how multi-armed bandit algorithms can be used to sequentially select among unbiased Monte Carlo samplers with the aim of minimizing mean-squared error. Examine the challenges of extending this approach to Markov chain Monte Carlo (MCMC) samplers, including how to properly measure sample quality and how to handle slowly mixing chains and multimodal target distributions. Learn about an asymptotically consistent adaptive MCMC algorithm that can significantly accelerate sampling, particularly for multimodal target distributions. Gain insights from experimental results demonstrating the algorithm's effectiveness in various scenarios.
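Because the samplers in question are unbiased, the mean-squared error of each arm's estimate is driven by its variance, so a bandit that learns to favour low-variance samplers is a natural fit. The sketch below is only a rough illustration of that idea, not the algorithm presented in the lecture: it assumes a few hypothetical unbiased samplers of the same quantity and uses a simple optimism-based rule on each sampler's estimated variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unbiased Monte Carlo samplers of the same quantity
# (here the mean of a zero-mean normal), differing only in variance.
samplers = [
    lambda: rng.normal(0.0, 1.0),   # low-variance sampler
    lambda: rng.normal(0.0, 3.0),   # noisier sampler
    lambda: rng.normal(0.0, 10.0),  # very noisy sampler
]

K, T = len(samplers), 5000
counts = np.zeros(K)    # number of draws taken from each sampler
sums = np.zeros(K)      # running sum of draws per sampler
sq_sums = np.zeros(K)   # running sum of squared draws per sampler

for t in range(T):
    if t < K:
        # Pull each arm once to initialise its statistics.
        k = t
    else:
        means = sums / counts
        variances = np.maximum(sq_sums / counts - means**2, 0.0)
        # Optimistic (lower-confidence) estimate of each sampler's variance:
        # prefer arms whose variance might still turn out to be small.
        bonus = np.sqrt(np.log(t + 1) / counts)
        k = int(np.argmin(variances - bonus))
    x = samplers[k]()
    counts[k] += 1
    sums[k] += x
    sq_sums[k] += x**2

best = int(np.argmax(counts))
print(f"most-used sampler: {best}, its estimate: {sums[best] / counts[best]:.4f}")
```

Run over many rounds, the allocation concentrates on the lowest-variance sampler, which (for unbiased arms) is exactly the one minimizing the mean-squared error of the resulting estimate.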