Generative Models for Image Synthesis

New York University (NYU) via YouTube

Overview

Explore the cutting-edge world of generative models and image synthesis in this ECE seminar talk by NVIDIA's Jan Kautz at New York University. Delve into the remarkable progress of generative adversarial networks (GANs) and their applications in image synthesis and image-to-image translation. Learn about unsupervised techniques for translating images between domains, such as day to night, and discover how these models can synthesize entirely new images. Gain insights into the innovative use of GANs for defect detection through self-generated training data. The talk covers traditional graphics pipelines, neural image synthesis, GAN examples, supervised vs. unsupervised learning, interactive capabilities, and advanced concepts such as the UNIT shared-latent-space assumption and latent space interpolation. Watch practical demonstrations of image translation, from snowy to summery scenes and sunny to rainy environments, and learn about the potential of DeepInversion techniques in this rapidly evolving field of artificial intelligence.
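For readers who want a concrete picture of the adversarial setup the talk surveys, the sketch below (not part of the course materials) shows a minimal GAN training step in PyTorch: a generator maps random noise to images while a discriminator learns to tell real images from generated ones. The tiny fully connected networks, image size, and random stand-in data are illustrative assumptions, not the models discussed in the talk.

import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # assumed toy sizes, not from the talk

# Generator: noise vector -> flattened image with values in [-1, 1].
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: flattened image -> single real/fake logit.
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, latent_dim))

    # Discriminator update: score real images as 1 and generated images as 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) \
           + bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator score fakes as real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Example usage with a random stand-in batch in place of real image data.
train_step(torch.rand(16, img_dim) * 2 - 1)

In the unsupervised image-to-image translation setting the talk describes (the UNIT shared-latent-space assumption), the generator would instead map images from one domain to another rather than from noise, but the adversarial training loop follows the same pattern.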

Syllabus

Intro
TRADITIONAL GRAPHICS PIPELINE
NEURAL IMAGE SYNTHESIS
GENERATIVE ADVERSARIAL NETWORKS (GANS)
GAN EXAMPLE
ENABLING IMAGINATION ABILITIES
SUPERVISED VS UNSUPERVISED
MAKING IT INTERACTIVE
UNIT ASSUMPTION: SHARED LATENT SPACE
SNOWY TO SUMMERY TRANSLATION
SUNNY TO RAINY TRANSLATION
MUNIT RESULTS
LATENT INTERPOLATION
DEEPINVERSION

Taught by

NYU Tandon School of Engineering
