YouTube

Networks that Adapt to Intrinsic Dimensionality Beyond the Domain

Inside Livermore Lab via YouTube

Overview

Explore the intricacies of deep learning networks and their ability to adapt to intrinsic dimensionality in this seminar by Alexander Cloninger from UC San Diego. Delve into the central question of network size requirements for function approximation and how data dimensionality impacts learning. Examine ReLU networks' approximation capabilities for functions with dimensionality-reducing feature maps, focusing on projections onto low-dimensional submanifolds and distances to low-dimensional sets. Discover how deep nets remain faithful to an intrinsic dimension governed by the function rather than domain complexity. Investigate connections to two-sample testing, manifold autoencoders, and data generation. Learn about Dr. Cloninger's research in geometric data analysis and applied harmonic analysis, exploring applications in imaging, medicine, and artificial intelligence.
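To make the seminar's central idea concrete, the following sketch (not from the talk itself; all names and parameters are illustrative) shows a function on a 100-dimensional domain that depends only on a one-dimensional feature map, so its intrinsic dimension is 1 regardless of the ambient dimension:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 100                       # ambient (domain) dimension
w = rng.normal(size=D)
w /= np.linalg.norm(w)        # unit vector: a 1-D feature map x -> w @ x

def f(x):
    # Target depends only on the 1-D projection w @ x;
    # a single ReLU unit, purely for illustration.
    return np.maximum(0.0, w @ x)

x = rng.normal(size=D)
# Perturb x along a direction orthogonal to w.
v = rng.normal(size=D)
v -= (w @ v) * w              # project out the w component
x2 = x + 5.0 * v

print(np.isclose(f(x), f(x2)))  # True: f ignores the other 99 dimensions
```

A network approximating such a function only needs to resolve the one-dimensional structure, which is the sense in which its size can scale with the intrinsic rather than the domain dimension.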

Syllabus

Introduction
Speaker Introduction
Overview
Neural Networks
The Curse of Dimensionality
Theory
Main Question
Manifold Learning Community
Reach of a Manifold
Linear Regression
Approximation Theory
Classification
Excess Risk
Recent Work
Chart Auto-Encoders
Neural Network Construction
Linear Encoders
Clustered Data
Questions
Conclusion
Hybrid Seminar

Taught by

Inside Livermore Lab
