Overview
Syllabus
Introduction
Feedforward neural networks
Studying the expressivity of DNNs
Example: the ReLU activation function
ReLU networks
Universal approximation property
Why sparsely connected networks?
Same sparsity - various network shapes
Approximation with sparse networks
Direct vs inverse estimate
Notion of approximation space
Role of skip-connections
Counting neurons vs connections
Role of the activation function
The case of spline activation functions (Theorem 2)
Guidelines to choose an activation?
Rescaling equivalence with the ReLU
Benefits of depth?
Role of depth
Set theoretic picture
Summary: Approximation with DNNs
Overall summary & perspectives
Taught by
Alan Turing Institute