

Learning 3D Reconstruction in Function Space - Long Version

Andreas Geiger via YouTube

Overview

Explore cutting-edge techniques in 3D reconstruction through this keynote presentation from CVPR 2020. Delve into neural implicit 3D representations, moving beyond traditional voxels, points, and meshes. Discover how these approaches offer compact memory usage and model complex 3D topologies at high resolutions in continuous function space. Examine the capabilities and limitations of reconstructing 3D geometry, textured models, and motion. Learn about innovative methods for learning implicit 3D models using only 2D supervision, including an analytic closed-form solution for gradient updates. Cover topics such as texture fields, occupancy flow, differentiable volumetric rendering, and neural radiance fields (NeRF). Gain insights into the future of 3D reconstruction and its applications in computer vision and graphics.
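To make the idea of a continuous function-space representation concrete, here is a toy sketch: a hand-written occupancy function for a sphere stands in for the learned network, showing how occupancy can be queried at arbitrary continuous coordinates instead of being stored in a fixed voxel grid. The function and parameter names below are illustrative, not from the talk.

```python
import numpy as np

def occupancy(points, radius=0.5):
    # Toy occupancy function: 1.0 inside a sphere of the given radius,
    # 0.0 outside. An occupancy network replaces this analytic rule with
    # a learned MLP o_theta(p) -> [0, 1] over continuous 3D coordinates.
    return (np.linalg.norm(points, axis=-1) < radius).astype(np.float32)

# Query at any resolution without ever storing a dense voxel grid:
res = 64
axes = [np.linspace(-1.0, 1.0, res)] * 3
grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1)  # (res, res, res, 3)
occ = occupancy(grid.reshape(-1, 3)).reshape(res, res, res)
```

Because the representation is a function rather than a grid, the same model can be evaluated at 64³, 256³, or along a single ray, which is what enables high-resolution surface extraction and the differentiable rendering discussed later.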

Syllabus

Intro
Traditional 3D Reconstruction Pipeline
3D Representations
Network Architecture
Training Objective
Texture Fields
Representation Power (Fit to 10 Models)
Occupancy Flow
Temporal Encoder
Loss Functions
Differentiable Volumetric Rendering
Universal Differentiable Renderer for Implicit Neural Representations
Learning Implicit Surface Light Fields
Single View Appearance Prediction
Convolutional Occupancy Networks
Deep Structured Implicit Functions
NeRF: Representing Scenes as Neural Radiance Fields
Summary
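The differentiable volumetric rendering and NeRF items above both rest on numerically integrating density and color along camera rays. A minimal quadrature sketch of that integral follows; the function and variable names are assumed for illustration, not taken from the talk.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    # Quadrature of the volume rendering integral used in NeRF-style
    # differentiable rendering:
    #   alpha_i = 1 - exp(-sigma_i * delta_i)          (per-sample opacity)
    #   T_i     = prod_{j < i} (1 - alpha_j)           (transmittance)
    #   C       = sum_i T_i * alpha_i * c_i            (rendered color)
    # sigmas: (N,) densities, colors: (N, 3) RGB, deltas: (N,) step sizes.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)
```

Since every operation here is differentiable, gradients of a 2D photometric loss flow back to the densities and colors, which is how implicit 3D models can be trained from 2D supervision alone.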

Taught by

Andreas Geiger
