
Automated Scalable Bayesian Inference via Data Summarization - 2018

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore advanced techniques for scalable Bayesian inference in large-scale data settings through this 59-minute lecture by MIT's Tamara Broderick. Delve into the concept of data summarization and coresets as a means to overcome computational challenges in Bayesian methods. Learn about theoretical guarantees on coreset size and approximation quality, and discover how this approach provides geometric decay in posterior approximation error. Examine the application of these techniques to both synthetic and real datasets, demonstrating significant improvements over uniform random subsampling. Gain insights into Broderick's research on developing and analyzing models for scalable Bayesian machine learning, and understand the potential impact of these methods on handling large datasets efficiently while maintaining the benefits of Bayesian inference.

Syllabus

"Core" of the data set • Observe: redundancies can exist even if data isn't "tall"
Roadmap
Bayesian coresets
Uniform subsampling revisited
Importance sampling
Hilbert coresets
Frank-Wolfe
Gaussian model (simulated) • 1K pts; norms; inference: closed-form
Logistic regression (simulated)
Real data experiments
Data summarization
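The syllabus contrasts uniform subsampling with importance-weighted summarization on a simulated Gaussian model with a closed-form posterior. A minimal sketch of that comparison (not the lecture's actual code; the prior, sensitivity proxy, and coreset size here are illustrative assumptions) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup echoing the "Gaussian model (simulated), 1K pts" slide:
# x_i ~ N(mu, 1) with conjugate prior mu ~ N(0, 1), so the posterior is closed-form.
n = 1000
true_mu = 2.0
x = rng.normal(true_mu, 1.0, size=n)

def posterior(weighted_sum, weight_total):
    """Closed-form Gaussian posterior (mean, variance) for weighted data."""
    var = 1.0 / (1.0 + weight_total)
    return weighted_sum * var, var

# Full-data posterior: every point has weight 1.
full_mean, full_var = posterior(x.sum(), n)

m = 50  # coreset size (illustrative choice)

# Uniform subsampling: each of the m sampled points gets weight n/m.
idx_u = rng.choice(n, size=m, replace=False)
uni_mean, _ = posterior((n / m) * x[idx_u].sum(), n)

# Importance sampling by a sensitivity proxy (data norms, as a stand-in):
# draw with probability p_i proportional to |x_i| and weight each draw by
# 1/(m * p_i), so the weighted sums are unbiased estimates of the full sums.
p = np.abs(x) / np.abs(x).sum()
idx_i = rng.choice(n, size=m, replace=True, p=p)
w = 1.0 / (m * p[idx_i])
imp_mean, _ = posterior((w * x[idx_i]).sum(), w.sum())

print(f"full-data posterior mean:   {full_mean:.3f}")
print(f"uniform-subsample approx:   {uni_mean:.3f}")
print(f"importance-weighted approx: {imp_mean:.3f}")
```

Both approximations target the same full-data posterior; the lecture's Hilbert-coreset and Frank-Wolfe constructions go further by choosing weights deterministically to guarantee geometric decay of the approximation error, which simple random sampling does not provide.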

Taught by

Center for Language & Speech Processing (CLSP), JHU

