Class Central Classrooms beta
YouTube videos curated by Class Central.
Classroom Contents
Neural Radiance Fields for View Synthesis
- 1 Intro
- 2 The problem of novel view interpolation
- 3 RGB-alpha volume rendering for view synthesis
- 4 Neural networks as a continuous shape representation
- 5 Neural network replaces large N-d array
- 6 Generate views with traditional volume rendering
- 7 Sigma parametrization for continuous opacity
- 8 Two pass rendering: coarse
- 9 Two pass rendering: fine
- 10 Viewing directions as input
- 11 Volume rendering is trivially differentiable
- 12 Optimize with gradient descent on rendering loss
- 13 NeRF encodes convincing view-dependent effects using directional dependence
- 14 NeRF encodes detailed scene geometry
- 15 Going forward
- 16 Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
- 17 Input Mapping
- 18 Key Points
- 19 Using kernel regression to approximate deep networks
- 20 NTK: modeling deep network as kernel regression
- 21 Sinusoidal mapping results in a composed stationary NTK
- 22 Resulting composed NTK is stationary
- 23 No-mapping NTK clearly not stationary
- 24 Toy example of stationarity in practice
- 25 Modifying mapping manipulates kernel spectrum
- 26 Kernel spectrum has dramatic effect on convergence and generalization
- 27 Frequency sampling distribution's bandwidth matters more than its shape
- 28 Mapping Code
- 29 2D Images
- 30 3D Shape
- 31 Indirect supervision tasks: Ground Truth
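
The "Input Mapping" and "Mapping Code" chapters cover the Fourier feature mapping from the talk's second paper: inputs v are lifted to gamma(v) = [cos(2*pi*B*v), sin(2*pi*B*v)], where the rows of B are sampled from a Gaussian whose scale controls the resulting kernel's bandwidth (the property chapter 27 refers to). Below is a minimal NumPy sketch of that mapping, not the authors' released code; the feature count (256) and Gaussian scale (10.0) are illustrative choices.

```python
import numpy as np

def fourier_feature_mapping(v, B):
    """Map coordinates v (shape [..., d]) to Fourier features:
    gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)], doubling the
    feature dimension from B's row count m to 2m."""
    proj = 2.0 * np.pi * v @ B.T          # shape [..., m]
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
# Rows of B ~ N(0, sigma^2); sigma sets the kernel bandwidth.
B = rng.normal(0.0, 10.0, size=(256, 2))
coords = rng.uniform(0.0, 1.0, size=(100, 2))  # e.g. 2D pixel coordinates
features = fourier_feature_mapping(coords, B)
print(features.shape)  # (100, 512)
```

The mapped features would then be fed to an MLP in place of the raw coordinates; per chapter 27, sweeping the Gaussian scale sigma matters more than the exact sampling distribution.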