

Neural Radiance Fields for View Synthesis

Andreas Geiger via YouTube

Overview

Explore a comprehensive talk on Neural Radiance Fields (NeRFs) for view synthesis, presented by Matthew Tancik from UC Berkeley at the Tübingen seminar series of the Autonomous Vision Group. Delve into state-of-the-art techniques for synthesizing novel views of complex scenes by optimizing a continuous volumetric scene function from a sparse set of input views. Learn how coordinate-based neural representations combine with classic volume rendering methods, and why mapping input coordinates to a higher-dimensional space with Fourier features is essential. Gain insights from Neural Tangent Kernel (NTK) analysis, which shows that this mapping is equivalent to transforming the network's kernel into a stationary one with tunable bandwidth. Examine the problem of novel view interpolation, RGB-alpha volume rendering, neural networks as continuous shape representations, and optimization by gradient descent on a rendering loss. Investigate how Fourier features affect network learning, including input mapping, kernel regression approximations, and the effect of the kernel spectrum on convergence and generalization. Explore practical applications to 2D images, 3D shapes, and indirect supervision tasks.
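To make the volume rendering side concrete, here is a minimal NumPy sketch of RGB-alpha compositing along a single ray, following the standard emission-absorption quadrature used in NeRF-style renderers. The function name render_ray and the toy inputs are illustrative assumptions, not code from the talk.

```python
import numpy as np

def render_ray(rgb, sigma, t_vals):
    """Composite per-sample colors along one ray using the standard
    emission-absorption quadrature of NeRF-style volume rendering.

    rgb:    (n, 3) color predicted at each sample point
    sigma:  (n,)   non-negative volume density at each sample point
    t_vals: (n,)   increasing sample depths along the ray
    """
    # Distance between consecutive samples; the last segment is treated
    # as effectively infinite so all remaining density is absorbed.
    deltas = np.diff(t_vals, append=t_vals[-1] + 1e10)
    alpha = 1.0 - np.exp(-sigma * deltas)        # per-segment opacity
    # Transmittance T_i: fraction of light reaching sample i unoccluded.
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    weights = trans * alpha                      # compositing weights, sum <= 1
    return weights @ rgb                         # final (3,) pixel color

# Toy usage: random colors and densities at 64 samples along one ray.
rng = np.random.default_rng(0)
t = np.linspace(2.0, 6.0, 64)
pixel = render_ray(rng.random((64, 3)), 5.0 * rng.random(64), t)
print(pixel)
```

Because every step is an elementary differentiable operation, the same computation in an autodiff framework is "trivially differentiable", which is what lets the scene representation be optimized with gradient descent on a rendering loss.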
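The Fourier feature mapping is just as compact. The sketch below assumes the Gaussian random mapping discussed in the talk; the frequency count (256) and the bandwidth value (10.0) are illustrative choices, not values from the lecture. It also checks the stationarity property numerically: the dot product of mapped features, and hence the composed kernel, depends only on the offset between the two inputs.

```python
import numpy as np

def fourier_features(v, B):
    """Map coordinates v of shape (..., d) to [cos(2*pi*Bv), sin(2*pi*Bv)]."""
    proj = 2.0 * np.pi * v @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
bandwidth = 10.0                                 # kernel bandwidth (assumed value)
B = bandwidth * rng.standard_normal((256, 2))    # 256 Gaussian frequencies, 2D input
coords = rng.random((4, 2))                      # four points in [0, 1)^2
feats = fourier_features(coords, B)              # shape (4, 512), fed to the MLP

# Stationarity check: the feature dot product depends only on the offset
# v - w, so shifting both inputs by the same amount leaves it unchanged.
v, w = coords[0], coords[1]
k = fourier_features(v, B) @ fourier_features(w, B)
k_shifted = fourier_features(v + 0.3, B) @ fourier_features(w + 0.3, B)
print(np.allclose(k, k_shifted))                 # True
```

Tuning the bandwidth reshapes the kernel spectrum, which the talk identifies as the knob controlling the trade-off between convergence speed and generalization.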

Syllabus

Intro
The problem of novel view interpolation
RGB-alpha volume rendering for view synthesis
Neural networks as a continuous shape representation
Neural network replaces large N-d array
Generate views with traditional volume rendering
Sigma parametrization for continuous opacity
Two pass rendering: coarse
Two pass rendering: fine
Viewing directions as input
Volume rendering is trivially differentiable
Optimize with gradient descent on rendering loss
NeRF encodes convincing view-dependent effects using directional dependence
NeRF encodes detailed scene geometry
Going forward
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
Input Mapping
Key Points
Using kernel regression to approximate deep networks
NTK: modeling deep network as kernel regression
Sinusoidal mapping results in a composed stationary NTK
Resulting composed NTK is stationary
No-mapping NTK clearly not stationary
Toy example of stationarity in practice
Modifying mapping manipulates kernel spectrum
Kernel spectrum has dramatic effect on convergence and generalization
Frequency sampling distribution: bandwidth matters more than shape
Mapping Code
2D Images
3D Shape
Indirect supervision tasks

Taught by

Andreas Geiger
