Overview
Explore the influential "Lottery Ticket Hypothesis" in neural network pruning through this informative video. Delve into the evidence suggesting that a neural network's effectiveness stems from its random initialization containing a nearly optimal subnetwork responsible for most of the final performance. Examine how standard pruning techniques uncover subnetworks whose initializations make them capable of effective training in isolation. Learn about the hypothesis that dense, randomly-initialized, feed-forward networks contain "winning tickets": subnetworks that, when trained in isolation, reach test accuracy comparable to the original network in a similar number of iterations. Discover the algorithm for identifying these winning tickets (sketched below) and the series of experiments supporting the hypothesis. Investigate the implications for network size reduction, improved computational performance, and faster learning across feed-forward architectures trained on the MNIST and CIFAR-10 datasets.
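The identification procedure covered in the video is iterative magnitude pruning with rewinding: train the network, prune a fraction of the smallest-magnitude weights, reset the survivors to their original initial values, and repeat. Below is a minimal PyTorch sketch of that loop; the tiny MLP, synthetic data, step counts, and 20% per-round pruning rate are illustrative assumptions, not the paper's exact experimental setup.

```python
import copy
import torch
import torch.nn as nn

# Toy stand-ins for the paper's MNIST / CIFAR-10 experiments (assumed setup).
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
init_state = copy.deepcopy(model.state_dict())        # save theta_0 for rewinding
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))

# One binary mask per weight matrix; 1 = weight survives, 0 = pruned.
masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}

def train(model, steps=200):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
        with torch.no_grad():                         # keep pruned weights at zero
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])

for _ in range(3):                                    # pruning rounds
    train(model)
    for name, p in model.named_parameters():
        if name not in masks:
            continue
        # Prune 20% of the smallest-magnitude weights still alive.
        alive = p.data.abs()[masks[name].bool()]
        k = max(1, int(0.2 * alive.numel()))
        threshold = alive.kthvalue(k).values
        masks[name] *= (p.data.abs() > threshold).float()
    # Rewind surviving weights to their original initialization theta_0.
    model.load_state_dict(init_state)
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

train(model)  # train the candidate "winning ticket" in isolation
```

The rewinding step is the crux of the hypothesis: the same sparse mask applied to fresh random weights, rather than the original initialization, typically trains far less effectively.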
Syllabus
The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Taught by
Yannic Kilcher