

On Sparse Linear Programming and Simple Neural Networks

Institute for Pure & Applied Mathematics (IPAM) via YouTube

Overview

Explore a thought-provoking lecture on the interplay between linear programming, sparse inference, and neural networks. Delve into two key instances where these optimization and estimation problems naturally arise in neural network analysis. Discover how overparametrized learning with shallow ReLU networks can be reduced to a finite-dimensional linear program, bridging the gap between NP-hard worst-case complexity and the practical success of gradient-based learning. Examine sparse inference from a computational perspective, comparing traditional L1-relaxed linear programming formulations with iterative shrinkage algorithms and their unrolled counterparts, which take the form of deep, narrow neural networks. Investigate whether depth is truly necessary for sparse inference through approximation lower bounds for shallow, wide networks. Gain insights from this collaborative research presented by Joan Bruna of New York University as part of the Deep Learning and Combinatorial Optimization 2021 series at the Institute for Pure and Applied Mathematics, UCLA.
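For a concrete handle on the objects mentioned above, here is a minimal, illustrative Python sketch (not code from the talk; the matrix sizes, regularization weight, and all names are assumptions chosen for the example). It implements plain ISTA for the L1-regularized least-squares problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, the shrinkage counterpart of the L1-relaxed program min ||x||_1 subject to Ax = b, and writes soft-thresholding as a difference of two ReLUs, which is the observation that lets the iterations be unrolled into a deep, narrow ReLU network.

```python
import numpy as np

def soft_threshold(z, theta):
    # Soft-thresholding written with two ReLUs:
    #   S_theta(z) = relu(z - theta) - relu(-z - theta),
    # so one ISTA step is an affine map followed by a pair of shifted ReLUs.
    return np.maximum(z - theta, 0.0) - np.maximum(-z - theta, 0.0)

def ista(A, b, lam, n_iters=500):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

    Unrolling n_iters of these steps yields a deep, narrow network with
    tied weights; learned variants (e.g. LISTA) train those weights instead.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - b)             # gradient of the quadratic term
        x = soft_threshold(x - step * grad, step * lam)
    return x

# Toy sparse-recovery instance (sizes are illustrative).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 200))
x_true = np.zeros(200)
x_true[rng.choice(200, size=5, replace=False)] = 1.0
b = A @ x_true
x_hat = ista(A, b, lam=0.1)
print("recovered support size:", int(np.sum(np.abs(x_hat) > 1e-3)))
```

In this picture, the lecture's question of whether depth is necessary asks whether the many narrow soft-thresholding layers above can be traded for a single shallow, wide ReLU layer without losing approximation accuracy.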

Syllabus

Joan Bruna: "On Sparse Linear Programming and (simple) neural networks"

Taught by

Institute for Pure & Applied Mathematics (IPAM)

