Overview
Explore the topological constraints of neural network architectures in this hour-long lecture from the Applied Algebraic Topology Network. Delve into the classical result that neural networks can approximate continuous functions on compact sets, then examine recent results of Johnson and of Hanin and Sellke showing how constraints on a network's architecture, in particular its width, limit which functions it can represent or approximate. Learn about explicit topological obstructions to function representation in neural networks and gain insight into ongoing research toward a general theory of how architecture constrains topological expressiveness. Acquire a foundational understanding of neural networks and their topological properties, with potential applications in machine learning and data science.
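As background, the classical starting point referenced above is the universal approximation theorem, stated here in one standard (rough) form, with $\sigma$ any continuous non-polynomial activation:

$$
\forall\, f \in C(K, \mathbb{R}),\ K \subset \mathbb{R}^d \text{ compact},\ \forall\, \varepsilon > 0:\quad \exists\, n,\ c_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^d \ \text{ such that } \ \sup_{x \in K} \Big| f(x) - \sum_{i=1}^{n} c_i\, \sigma(w_i \cdot x + b_i) \Big| < \varepsilon.
$$

The width $n$ in this statement may grow without bound; the results of Johnson and of Hanin and Sellke discussed in the lecture concern what can and cannot be represented once the architecture, notably the width, is constrained.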
Syllabus
Eli Grigsby (11/13/19): On the topological expressiveness of neural networks
Taught by
Applied Algebraic Topology Network