Neural Radiance Fields for View Synthesis

Andreas Geiger via YouTube

Classroom Contents

  1. Intro
  2. The problem of novel view interpolation
  3. RGB-alpha volume rendering for view synthesis
  4. Neural networks as a continuous shape representation
  5. Neural network replaces large N-d array
  6. Generate views with traditional volume rendering
  7. Sigma parametrization for continuous opacity
  8. Two-pass rendering: coarse
  9. Two-pass rendering: fine
  10. Viewing directions as input
  11. Volume rendering is trivially differentiable (see the rendering sketch after this list)
  12. Optimize with gradient descent on rendering loss
  13. NeRF encodes convincing view-dependent effects using directional dependence
  14. NeRF encodes detailed scene geometry
  15. Going forward
  16. Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
  17. Input Mapping (see the Fourier feature sketch after this list)
  18. Key Points
  19. Using kernel regression to approximate deep networks
  20. NTK: modeling a deep network as kernel regression
  21. Sinusoidal mapping results in a composed stationary NTK
  22. Resulting composed NTK is stationary
  23. No-mapping NTK is clearly not stationary
  24. Toy example of stationarity in practice
  25. Modifying the mapping manipulates the kernel spectrum
  26. Kernel spectrum has a dramatic effect on convergence and generalization
  27. Frequency sampling distribution bandwidth matters more than shape
  28. Mapping Code
  29. 2D Images
  30. 3D Shape
  31. Indirect supervision tasks: Ground Truth
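
Chapters 11 and 12 rest on one fact: the NeRF compositing step is built entirely from differentiable operations, so a rendering loss can be minimized by gradient descent. Below is a minimal NumPy sketch of that quadrature, assuming the standard NeRF formulation (per-sample opacity alpha_i = 1 - exp(-sigma_i * delta_i), compositing weights T_i * alpha_i); the function and variable names are illustrative, not taken from the talk.

```python
import numpy as np

def render_ray(sigmas, colors, deltas):
    """Composite K samples along one ray into an RGB color (NeRF-style quadrature)."""
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i = prod_{j < i} (1 - alpha_j): light surviving to sample i
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas[:-1])))
    # Final color: C = sum_i T_i * alpha_i * c_i
    weights = trans * alphas
    return weights @ colors

# Toy usage with random stand-ins for network outputs (illustrative values only).
rng = np.random.default_rng(0)
K = 64
sigmas = rng.random(K)           # densities predicted along the ray
colors = rng.random((K, 3))      # RGB predicted at each sample
deltas = np.full(K, 1.0 / K)     # uniform spacing between samples
rgb = render_ray(sigmas, colors, deltas)
```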

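Chapters 16 through 28 cover the Fourier-features result: mapping each low-dimensional input v to [cos(2πBv), sin(2πBv)], with the rows of B drawn from a Gaussian, makes the composed NTK stationary, and the Gaussian's scale (the sampling bandwidth) matters more than its shape. Here is a minimal sketch of that input mapping under the Gaussian parametrization; fourier_feature_mapping and the chosen sigma are illustrative, not the paper's released code.

```python
import numpy as np

def fourier_feature_mapping(v, B):
    """Map inputs v of shape (N, d) to features [cos(2*pi*Bv), sin(2*pi*Bv)]."""
    proj = 2.0 * np.pi * (v @ B.T)      # (N, m) random projections
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# Gaussian frequency matrix: sigma sets the kernel bandwidth
# (larger sigma -> higher-frequency detail, at the risk of overfitting).
rng = np.random.default_rng(0)
sigma = 10.0
B = sigma * rng.standard_normal((256, 2))     # 256 frequencies for 2D inputs
coords = rng.random((1024, 2))                # e.g. pixel coordinates in [0, 1]^2
features = fourier_feature_mapping(coords, B)  # shape (1024, 512)
```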