

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Yannic Kilcher via YouTube

Overview

Explore the "Lottery Ticket Hypothesis" in neural network pruning through this video. Delve into evidence that a neural network's random initialization already contains a nearly optimal subnetwork responsible for most of the final performance. Examine how standard pruning techniques uncover subnetworks whose initializations allow them to train effectively in isolation. Learn about the hypothesis that dense, randomly initialized, feed-forward networks contain "winning tickets": subnetworks that can match the test accuracy of the original network in a similar number of training iterations. Discover the algorithm for identifying these winning tickets and the series of experiments supporting the hypothesis. Investigate the implications for reducing network size, improving computational performance, and accelerating learning in various feed-forward architectures on the MNIST and CIFAR-10 datasets.
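The procedure at the heart of the paper is iterative magnitude pruning with a reset to the original initialization. A minimal NumPy sketch of that loop follows; `train_fn` is a hypothetical callback, assumed to train the masked network and return its final weights, and is not part of the paper's code:

```python
import numpy as np

def find_winning_ticket(init_params, train_fn, prune_fraction=0.2, rounds=5):
    """Iterative magnitude pruning with rewinding to the original init.

    init_params: dict mapping layer name -> initial weights (theta_0).
    train_fn:    hypothetical callback; given masked initial weights, it
                 should train the network (keeping masked weights at zero)
                 and return the trained weights (theta_j).
    """
    masks = {name: np.ones_like(w) for name, w in init_params.items()}

    for _ in range(rounds):
        # Train from the ORIGINAL initialization under the current mask.
        trained = train_fn({n: init_params[n] * masks[n] for n in init_params})

        # Prune the lowest-magnitude fraction of the surviving weights,
        # layer by layer.
        for name, w in trained.items():
            alive = np.abs(w[masks[name] == 1])
            threshold = np.quantile(alive, prune_fraction)
            masks[name] = np.where(np.abs(w) < threshold, 0.0, masks[name])

    # The "winning ticket": the original init weights under the final mask.
    return {n: init_params[n] * masks[n] for n in init_params}, masks
```

Resetting the surviving weights to their original initial values, rather than reinitializing them randomly, is the step the paper identifies as essential: the same sparse architecture with a fresh random initialization trains markedly worse.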

Syllabus

The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Taught by

Yannic Kilcher

