Overview
Explore implicit neural scene representations in this 57-minute talk by Vincent Sitzmann, given in the Tübingen seminar series of the Autonomous Vision Group. Delve into the implications of signal representation for algorithm development, examining alternatives to discrete representations such as pixel grids and point clouds. Learn about embedding implicit scene representations in neural rendering frameworks and leveraging gradient-based meta-learning for fast inference. Discover how these techniques enable 3D reconstruction from a single 2D image and yield features useful for semantic segmentation. Gain insights into the potential of neural scene representations to let independent agents reason about and model complex scenes from limited observations.
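To make the core idea concrete, below is a minimal sketch (not the speaker's own code) of a sinusoidal representation network (SIREN) in PyTorch that represents a single image as a continuous function from 2D coordinates to RGB values; the layer widths and the omega_0 frequency factor are common defaults from the SIREN paper, not values taken from this talk.

```python
# Minimal SIREN sketch: an MLP with sine activations that maps
# 2D pixel coordinates in [-1, 1] to RGB colors, so the image is
# stored implicitly in the network weights rather than a pixel grid.
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        # SIREN's initialization keeps activations well-distributed
        # through the sine nonlinearities.
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = (6.0 / in_features) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class Siren(nn.Module):
    def __init__(self, in_features=2, hidden=256, out_features=3, layers=3):
        super().__init__()
        net = [SineLayer(in_features, hidden, is_first=True)]
        net += [SineLayer(hidden, hidden) for _ in range(layers - 1)]
        net += [nn.Linear(hidden, out_features)]
        self.net = nn.Sequential(*net)

    def forward(self, coords):
        # coords: (N, 2) in [-1, 1]; returns (N, 3) RGB predictions.
        return self.net(coords)

# Usage: fit an image by regressing colors at sampled coordinates
# (placeholder data here; a real fit would use the image's pixels).
model = Siren()
coords = torch.rand(1024, 2) * 2 - 1           # random query coordinates
target = torch.rand(1024, 3)                   # placeholder ground-truth colors
loss = ((model(coords) - target) ** 2).mean()  # MSE reconstruction loss
```

Because the representation is a continuous function, it can be queried at arbitrary resolution; the same recipe extends to audio (1D coordinates) and to 3D scenes, as covered in the talk.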
Syllabus
Introduction
Implicit Neural Representation
Why Does That Not Work?
Sinusoidal Representation Networks
Audio Signals
Scene Reconstruction
Different Models
DeepVoxels
Implicit Neural Representation
Neural Renderer
Learning Priors
Few-Shot Reconstruction
Generalizing
Complex Scenes
Related 3D Scenes
AutoDecoder
MetaSDF Fitting
Test Time
Comparison
Distance Functions
Semi-Supervised Approach
Recap
Future work
Acknowledgements
Taught by
Andreas Geiger