Overview
Explore a thought-provoking lecture examining the theoretical challenges in understanding deep learning's successes and failures. Harvard professor Boaz Barak presents empirical evidence challenging conventional theories of how neural networks learn, covering three key findings: the similarity of internal representations across different training methods; non-monotonic learning patterns, in which performance temporarily degrades during training; and the complex nature of learning, which cannot be reduced to a simple layer-by-layer progression. Drawing on collaborative research with prominent scholars, delve into the intersection of approximation, optimization, and statistics to better understand deep learning's fundamental principles and limitations.
Syllabus
Delivered on Thursday, December 29th, 2022
Taught by
HUJI Machine Learning Club