
YouTube

Constraining 3D Fields for Reconstruction and View Synthesis

Andreas Geiger via YouTube

Overview

Explore cutting-edge techniques in 3D field reconstruction and view synthesis in this 27-minute talk from the ECCV 2022 workshop. Dive into three innovative approaches: RegNeRF, which addresses shape-appearance ambiguity and improves results with limited input views; MonoSDF, a method for monocular 3D reconstruction utilizing geometric cues; and TensoRF, introducing tensorial radiance fields for fast training and efficient representation. Learn about scene space annealing, depth map prediction from single images, and various tensor decomposition techniques. Gain insights into the challenges of 3D reconstruction and how these novel methods overcome them, illustrated with ablation studies and comparisons across different datasets.
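The scene space annealing mentioned above (RegNeRF's trick for few-view settings) amounts to restricting ray samples to a small region around the scene center early in training and gradually widening the interval to the full near/far range. A rough, hypothetical sketch of that schedule, assuming a linear anneal and an invented `start_frac` knob for the initial interval width:

```python
import numpy as np

def annealed_bounds(step, anneal_steps, near, far, start_frac=0.5):
    """Linearly expand the depth-sampling interval from a small region
    around the scene midpoint out to the full [near, far] range."""
    t = min(step / anneal_steps, 1.0)                 # anneal progress in [0, 1]
    mid = 0.5 * (near + far)                          # scene midpoint
    half = 0.5 * (far - near) * (start_frac + (1.0 - start_frac) * t)
    return mid - half, mid + half

def sample_depths(step, anneal_steps, near, far, n_samples=64):
    """Stratified depth samples restricted to the annealed interval."""
    lo, hi = annealed_bounds(step, anneal_steps, near, far)
    u = (np.arange(n_samples) + np.random.rand(n_samples)) / n_samples
    return lo + u * (hi - lo)
```

Early on, rays are only evaluated near the scene center, which discourages the degenerate "floater" geometry that otherwise appears with very few input views.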

Syllabus

Neural Shape and Appearance Representations
NeRF Results with 3 Input Views
Shape-Appearance Ambiguity
RegNeRF: Overview
RegNeRF: Scene Space Annealing
RegNeRF: Ablation Study
3D Reconstruction is an ill-posed Problem
Depth Map Prediction from a Single Image
OmniData: Vision Data from 3D Scans
MonoSDF: Monocular Geometric Cues for Reconstruction
MonoSDF: Ablation Study on Replica Dataset
MonoSDF: Ablation Study on ScanNet
TensoRF: Tensorial Radiance Fields
TensoRF: 4D Representation - CANDECOMP/PARAFAC (CP) vs. Vector-Matrix (VM)
TensoRF: Fast Training
TensoRF: Tensor Decomposition
TensoRF: CP Decomposition
TensoRF: CP vs. VM Decomposition
TensoRF: VM Decomposition
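The CP and VM decompositions contrasted in the TensoRF slides above factorize a dense 3D feature grid into low-rank components. In CP, each rank-1 component is an outer product of three per-axis vectors; in VM, each component pairs a vector along one axis with a matrix spanning the other two (there are three such axis pairings; only one is shown here). A toy numpy illustration with made-up rank and resolution, not TensoRF's actual settings:

```python
import numpy as np

rng = np.random.default_rng(0)
R, N = 4, 16  # rank and grid resolution (toy sizes)

# CP decomposition: sum of R outer products of three axis vectors.
vx, vy, vz = (rng.normal(size=(R, N)) for _ in range(3))
cp_grid = np.einsum('rx,ry,rz->xyz', vx, vy, vz)

# VM decomposition (one of three terms): vector along x paired with a
# matrix over the (y, z) plane.
myz = rng.normal(size=(R, N, N))
vm_grid = np.einsum('rx,ryz->xyz', vx, myz)

# Parameter counts vs. a dense N^3 grid:
cp_params = 3 * R * N                 # three vectors per component
vm_params = 3 * (R * N + R * N * N)   # vector + matrix per axis pairing
dense_params = N ** 3
```

VM stores more parameters per component than CP but captures richer per-plane structure, which is why TensoRF reaches higher quality at a given rank while both stay far below the dense grid's memory footprint.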

Taught by

Andreas Geiger

