Implicit Neural Representations: From Objects to 3D Scenes

Andreas Geiger via YouTube

Overview

Explore a keynote presentation on implicit neural representations for 3D scene reconstruction and understanding. Delve into advanced techniques for overcoming limitations of fully-connected network architectures in implicit approaches. Learn about a hybrid model combining neural implicit shape representation with 2D/3D convolutions for detailed object and large-scale scene reconstruction. Discover methods for capturing and manipulating visual appearance through surface light field representations. Gain insights into recent efforts in collecting real-world material information for training these models. Examine the KITTI-360 dataset, featuring 360-degree sensor data and semantic annotations for outdoor environments. Cover topics including convolutional occupancy networks, object-level and scene-level reconstruction, appearance prediction, generative models, and joint estimation of pose, geometry, and SVBRDF.
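To make the core idea concrete, here is a minimal sketch of the kind of implicit occupancy query described above: a small network maps a 3D point, together with a local feature vector (which in the hybrid model would come from a 2D/3D convolutional encoder), to an occupancy probability in [0, 1]. The weights below are random placeholders for illustration, not a trained model, and the layer sizes are arbitrary assumptions.

```python
import numpy as np

# Placeholder weights for a tiny MLP decoder; a real convolutional
# occupancy network would train these and derive the per-point
# features from a convolutional feature grid.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3 + 8, 32)) * 0.1  # input: 3D point + 8-dim local feature
b1 = np.zeros(32)
W2 = rng.standard_normal((32, 1)) * 0.1
b2 = np.zeros(1)

def occupancy(points, features):
    """Query occupancy probability at each 3D point.

    points:   (N, 3) query coordinates
    features: (N, 8) local features (assumed interpolated from a conv grid)
    """
    x = np.concatenate([points, features], axis=1)
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid -> occupancy in (0, 1)

pts = rng.standard_normal((4, 3))
feats = rng.standard_normal((4, 8))
probs = occupancy(pts, feats)   # shape (4, 1), each value in (0, 1)
```

Because the decoder is queried per point, the same function can be evaluated at arbitrary resolution, which is what lets implicit representations scale from single objects to large scenes when paired with convolutional features.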

Syllabus

Intro
Collaborators
3D Representations
Limitations
Convolutional Occupancy Networks
Comparison
Object-Level Reconstruction
Training Speed
Scene-Level Reconstruction
Large-Scale Reconstruction
Key Insights
Problem Definition
Existing Representation
Overfitting to Single Objects
Single Object Experiments
Single Image Appearance Prediction
Single View Appearance Prediction
Generative Model
Materials
Joint Estimation of Pose, Geometry and SVBRDF
Qualitative Results
3D Annotations

Taught by

Andreas Geiger
