Syllabus
Intro
Big Neural Nets
Big Models Over-Fitting
Training with Dropout
Dropout/DropConnect Intuition
Theoretical Analysis of DropConnect
MNIST Results
Varying Size of Network
Varying Fraction Dropped
Comparison of Convergence Rates
Limitations of Dropout/DropConnect
Stochastic Pooling
Methods for Test Time
Varying Size of Training Set
Convergence / Over-Fitting
Street View House Numbers
Deconvolutional Networks
Recap: Sparse Coding (Patch-based)
Reversible Max Pooling
Single Layer Cost Function
Single Layer Inference
Effect of Sparsity
Effect of Pooling Variables
Talk Overview
Stacking the Layers
Two Layer Example
Link to Parts and Structure Models
Caltech 101 Experiments
Layer 2 Filters
Classification Results: Caltech 101
Deconvolutional + Convolutional
Summary
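
For orientation, a minimal sketch of the two regularizers covered in the first half of the outline: Dropout zeroes whole unit activations during training, while DropConnect zeroes individual weights. This is an illustrative NumPy toy under assumed shapes and names, not code from the lecture.

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Toy fully connected layer: 4 inputs -> 3 units (shapes are hypothetical).
    W = rng.standard_normal((3, 4))
    x = rng.standard_normal(4)
    p = 0.5  # probability of keeping a unit/weight

    # Dropout: one Bernoulli mask per *output unit*, zeroing
    # entire activations for this training pass.
    unit_mask = (rng.random(3) < p).astype(W.dtype)
    h_dropout = unit_mask * (W @ x)

    # DropConnect: one Bernoulli mask per *weight*, zeroing
    # individual connections instead of whole units.
    weight_mask = (rng.random(W.shape) < p).astype(W.dtype)
    h_dropconnect = (weight_mask * W) @ x

    print("dropout:    ", h_dropout)
    print("dropconnect:", h_dropconnect)

The two also differ at inference (the "Methods for Test Time" segment): Dropout typically rescales weights by the keep probability, whereas the DropConnect paper approximates the masked layer's pre-activations with a Gaussian moment-matching step.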
Taught by
UCF CRCV