Overview
Explore a comprehensive video explanation of the paper "Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions". Delve into the challenges of integrating discrete probability distributions and combinatorial optimization problems into neural networks. Learn about the Implicit Maximum Likelihood Estimation (I-MLE) framework for end-to-end learning of models that combine discrete exponential family distributions with differentiable neural components. Discover how I-MLE enables backpropagation through discrete algorithms, allowing combinatorial optimizers to be part of a network's forward pass. Follow along as the video breaks down key concepts, including the straight-through estimator, encoding discrete problems as inner products, and approximating marginals via perturb-and-MAP. Gain insights into the paper's contributions, methodology, and practical applications through detailed explanations and visual aids.
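To make the perturb-and-MAP and gradient-substitution ideas above concrete, here is a minimal PyTorch sketch, not code from the video or the paper, of an I-MLE-style layer whose MAP solver is top-k selection. The class name IMLETopK, the helper topk_one_hot, the hyperparameter lam, and the dummy downstream loss are all hypothetical choices for illustration; the exact gradient scaling in the paper may differ.

```python
import torch

def sample_gumbel(shape):
    # Standard Gumbel noise used to perturb the parameters (perturb-and-MAP).
    u = torch.rand(shape)
    return -torch.log(-torch.log(u + 1e-20) + 1e-20)

def topk_one_hot(scores, k):
    # MAP solver for a k-subset distribution: indicator vector of the k
    # highest-scoring entries. Any combinatorial solver could stand in here.
    idx = scores.topk(k, dim=-1).indices
    return torch.zeros_like(scores).scatter_(-1, idx, 1.0)

class IMLETopK(torch.autograd.Function):
    """Sketch of an I-MLE-style straight-through layer (hypothetical example)."""

    @staticmethod
    def forward(ctx, theta, k, lam):
        # Forward pass: perturb the parameters with Gumbel noise, then run
        # the discrete MAP solver on the perturbed scores.
        noise = sample_gumbel(theta.shape)
        ctx.save_for_backward(theta, noise)
        ctx.k, ctx.lam = k, lam
        return topk_one_hot(theta + noise, k)

    @staticmethod
    def backward(ctx, grad_output):
        theta, noise = ctx.saved_tensors
        k, lam = ctx.k, ctx.lam
        # Gradient substitution: define a target distribution by shifting the
        # parameters against the incoming gradient, re-solve the MAP problem,
        # and use the difference of the two discrete states as the gradient.
        z = topk_one_hot(theta + noise, k)
        z_target = topk_one_hot(theta + noise - lam * grad_output, k)
        return (z - z_target) / lam, None, None

# Usage sketch: the cost vector below is a stand-in for a downstream network.
theta = torch.randn(32, 100, requires_grad=True)  # scores from an upstream model
z = IMLETopK.apply(theta, 5, 10.0)                # discrete k-hot state in the forward pass
costs = torch.randn(100)
loss = (z * costs).sum()                          # hypothetical downstream objective
loss.backward()                                   # gradients reach theta via I-MLE
```

The key design point, as discussed in the video, is that the forward pass runs a genuinely discrete solver, while the backward pass substitutes an informative surrogate gradient built from two MAP calls rather than differentiating through the solver itself.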
Syllabus
- Intro & Overview
- Sponsor: Weights & Biases
- Problem Setup & Contributions
- Recap: Straight-Through Estimator
- Encoding the discrete problem as an inner product
- From algorithm to distribution
- Substituting the gradient
- Defining a target distribution
- Approximating marginals via perturb-and-MAP
- Entire algorithm recap
- GitHub Page & Example
Taught by
Yannic Kilcher