Overview
Syllabus
Intro
Image Formation Is a Complex Process
Component Problems of Inverse Rendering
Ambiguities of Inverse Rendering
Approaches to Inverse Rendering
A Canonical Challenge in Inverse Rendering
Outline
Image Formation: Rendering Equation
Background: BRDF
Background: Lighting
Large-Scale Dataset of Complex Materials
Physically-Based Rendering Layer
An Example
Comparison with Other Methods
Generalization to Real Data
Single Image Inverse Rendering
Physically-Motivated Network: Rendering Layer
Synthetic Experiment: Global Illumination
Physically-Motivated Network: Cascade Structure
Example: Cascade Structure
Example: Shape and Material Estimation
Example: View Synthesis
High-Quality, Photorealistic Augmented Reality
Key New Challenge: Spatially-Varying Lighting
Lighting Estimation Methods
Inverse Rendering in Indoor Scenes: Challenges
Ground Truth for Inverse Rendering Is Non-Trivial
Comparisons of Rendered Images
Compact and Effective Physical Lighting Representation
Spatially-Varying Lighting Estimation: Representation
Physically-Motivated Network for Indoor Scenes
Spatially-Varying Lighting Estimation: Results
Inverse Rendering in Real Indoor Scenes
Inverse Rendering: Quantitative Results
Object Insertion with Single Unconstrained Image
Object Insertion: User Studies
Material Editing with Single Unconstrained Image
Lightweight Acquisition with a Mobile Phone Camera
Physically-Motivated Deep Network
Conclusions
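
For readers skimming the syllabus: the "Image Formation: Rendering Equation" and "Background: BRDF" items refer to the standard reflectance (rendering) equation. A common textbook form, written out here for reference only (the notation is ours, not necessarily the lecture's), is

\[
L_o(\mathbf{x}, \omega_o) \;=\; L_e(\mathbf{x}, \omega_o) \;+\; \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\mathbf{n} \cdot \omega_i)\, \mathrm{d}\omega_i ,
\]

where L_o is the outgoing radiance at surface point x in direction ω_o, L_e is emitted radiance, f_r is the BRDF, L_i is incoming radiance, and n is the surface normal. Inverse rendering amounts to recovering f_r (material), n (shape), and L_i (lighting) from the observed L_o, which is the source of the ambiguities discussed early in the talk.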
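
The "Physically-Based Rendering Layer" and "Physically-Motivated Network: Rendering Layer" items describe a differentiable renderer placed inside the network so that predicted shape, material, and lighting can be re-rendered and compared against the input image. As an illustration only, the sketch below shows a minimal Lambertian-only layer under a handful of distant lights in NumPy; the function name, arguments, and shading model are assumptions for exposition and not the layer used in the lecture.

import numpy as np

def lambertian_render(albedo, normals, light_dirs, light_rgb):
    """Diffuse shading: (albedo / pi) * sum_i max(n . l_i, 0) * L_i.

    Hypothetical helper for illustration; not code from the lecture.
    albedo:     (H, W, 3) diffuse albedo in [0, 1]
    normals:    (H, W, 3) unit surface normals
    light_dirs: (N, 3)    unit directions toward N distant lights
    light_rgb:  (N, 3)    RGB radiance carried by each direction
    """
    # Clamped cosine term per pixel and per light (no light from behind the surface).
    cos = np.clip(normals @ light_dirs.T, 0.0, None)    # (H, W, N)
    # Accumulate incoming radiance weighted by the cosine foreshortening.
    irradiance = cos @ light_rgb                         # (H, W, 3)
    return albedo / np.pi * irradiance

# Tiny usage check: a flat patch lit head-on by one white light.
H, W = 4, 4
albedo = np.full((H, W, 3), 0.5)
normals = np.tile(np.array([0.0, 0.0, 1.0]), (H, W, 1))
light_dirs = np.array([[0.0, 0.0, 1.0]])
light_rgb = np.array([[3.0, 3.0, 3.0]])
print(lambertian_render(albedo, normals, light_dirs, light_rgb)[0, 0])  # ~[0.48 0.48 0.48]

In practice such a layer would be written with autograd tensors rather than plain NumPy, so that gradients of a re-rendering loss flow back to the predicted albedo, normals, and lighting.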
Taught by
Andreas Geiger