

AttnGAN: Fine-Grained Text to Image Generation with Attentional Generative Adversarial Networks

University of Central Florida via YouTube

Overview

Explore the innovative AttnGAN model for fine-grained text-to-image generation in this 46-minute lecture from the University of Central Florida. Delve into the architecture's key components, including the text encoder, conditioning augmentation, generator, attention network, and image encoder. Examine the DAMSM loss and its role in improving image quality. Learn about experimental results on various datasets, evaluation metrics like Inception score, and component analysis. Discover the model's capabilities in generating novel scenarios and understand its limitations in capturing global coherent structure. Gain insights into the challenges and advancements in text-to-image synthesis using attentional generative adversarial networks.
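The attention network (F^attn) mentioned above is the core idea behind AttnGAN: at each stage, every image region attends over the word embeddings and receives a word-context vector that guides fine-grained refinement. The following is a minimal sketch of that idea in PyTorch; the function name, tensor shapes, and the `proj` layer are illustrative assumptions, not the lecture's or the paper's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def word_attention(word_feats, img_feats, proj):
    """Sketch of AttnGAN-style word attention (the F^attn module).

    word_feats: (batch, T, D)   word embeddings from the text encoder
    img_feats:  (batch, N, Dh)  image region features from the previous stage
    proj:       nn.Linear(D, Dh) mapping words into the image feature space
    Returns one word-context vector per image region, shape (batch, N, Dh).
    """
    words = proj(word_feats)                               # (batch, T, Dh)
    scores = torch.bmm(img_feats, words.transpose(1, 2))   # (batch, N, T)
    attn = F.softmax(scores, dim=-1)                       # attention over words, per region
    return torch.bmm(attn, words)                          # (batch, N, Dh)

# Hypothetical usage: 18 words of dim 256 attending over 64*64 regions of dim 48.
proj = nn.Linear(256, 48)
context = word_attention(torch.randn(2, 18, 256), torch.randn(2, 64 * 64, 48), proj)
```

In the full model, these word-context vectors are combined with the previous stage's image features before the next generator upsamples them, which is what allows later stages to sharpen details tied to specific words.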

Syllabus

Intro
Problem: Text-to-image
Related work
Architecture - Motivation
Architecture - Text Encoder
Architecture - Conditioning Augmentation
Architecture - Generator F_i
Architecture - Attention network F^attn
Architecture - Image Encoder
Architecture - DAMSM loss
Experiments - Datasets
Experiments - Evaluation: Inception score (see the sketch after this syllabus)
Experiments - Component Analysis
Experiments - Qualitative (CUB)
Experiments - Novel scenarios
Experiments - Failure cases: did not capture global coherent structure
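As a companion to the evaluation item above, here is a rough sketch of how the Inception score is typically computed from the class probabilities p(y|x) that a pretrained Inception network assigns to generated images. The function name and single-split computation are simplifying assumptions; standard practice also averages the score over several splits, and this is not the lecture's evaluation code.

```python
import numpy as np

def inception_score(probs, eps=1e-12):
    """Rough Inception score: exp of the mean KL(p(y|x) || p(y)).

    probs: (num_images, num_classes) softmax outputs of a pretrained
           Inception network evaluated on generated images.
    """
    p_y = probs.mean(axis=0, keepdims=True)                              # marginal label distribution p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)  # per-image KL divergence
    return float(np.exp(kl.mean()))

# Hypothetical usage with random probabilities for 100 "images" over 1000 classes.
fake_probs = np.random.dirichlet(np.ones(1000), size=100)
print(inception_score(fake_probs))
```

A higher score rewards a model whose images each get a confident class prediction (sharp p(y|x)) while the set as a whole covers many classes (near-uniform p(y)), which is the intuition behind using it to compare text-to-image models.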

Taught by

UCF CRCV

