Explore the regularization technique called Manifold Mixup in this informative video. Learn how this method addresses common issues in standard neural networks, such as jagged classification boundaries and overconfident predictions. Discover how hidden representations of different data points are interpolated, with the network trained to predict correspondingly interpolated labels. Understand how Manifold Mixup encourages neural networks to predict less confidently on interpolations of hidden representations, yielding smoother decision boundaries at multiple levels of representation. Examine the theoretical foundations behind this technique and its practical benefits: improved supervised learning performance, robustness to single-step adversarial attacks, and better test log-likelihood. Gain insights into how Manifold Mixup leads neural networks to learn class representations with fewer directions of variance, and explore its connections to previous work on information theory and generalization.
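The core operation described above can be sketched in a few lines: draw a mixing coefficient from a Beta distribution, then take the same convex combination of two hidden representations and of their one-hot labels. This is a minimal illustrative sketch, not the authors' implementation; the function name `manifold_mixup` and the default `alpha` are assumptions for illustration.

```python
import numpy as np

def manifold_mixup(h_a, h_b, y_a, y_b, alpha=2.0, rng=None):
    """Mix two hidden representations and their one-hot labels.

    h_a, h_b : hidden-layer activations for two training examples
    y_a, y_b : their one-hot label vectors
    alpha    : Beta-distribution parameter controlling mixing strength
               (a hypothetical default; the paper treats it as a hyperparameter)
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Sample the interpolation coefficient lam ~ Beta(alpha, alpha), lam in [0, 1].
    lam = rng.beta(alpha, alpha)
    # Apply the same convex combination to hidden states and to labels,
    # so the network is trained to be "equally interpolated" in its output.
    h_mix = lam * h_a + (1.0 - lam) * h_b
    y_mix = lam * y_a + (1.0 - lam) * y_b
    return h_mix, y_mix, lam
```

In practice this mixing is applied at a randomly chosen hidden layer of the network during the forward pass (not only at the input, which would recover ordinary Mixup), and the mixed label is used as the training target for the mixed activations.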