Neural-Symbolic VQA - Disentangling Reasoning from Vision and Language Understanding
University of Central Florida via YouTube
Overview
Explore the innovative approach to Visual Question Answering (VQA) that disentangles reasoning from vision and language understanding in this 27-minute lecture from the University of Central Florida. Delve into the task breakdown, architecture overview, and key components such as question parsing and program execution. Examine quantitative results on CLEVR and CLEVR-Humans datasets, and discover how this neural-symbolic method extends to new scenes like Minecraft. Gain insights into the future of AI systems that can effectively combine reasoning with visual and linguistic comprehension.
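The core idea described above — parse a question into a program of primitive operations, then execute that program over a symbolic scene representation — can be illustrated with a minimal sketch. This is not the authors' code; the operation names, scene attributes, and program format are illustrative assumptions loosely modeled on CLEVR-style questions.

```python
# Minimal sketch of neural-symbolic program execution (illustrative only).
# A vision module would produce the symbolic scene; a question parser would
# produce the program. Here both are hand-written for clarity.

def filter_color(objects, color):
    # Keep only objects matching the given color attribute.
    return [o for o in objects if o["color"] == color]

def filter_shape(objects, shape):
    # Keep only objects matching the given shape attribute.
    return [o for o in objects if o["shape"] == shape]

def count(objects):
    # Terminal op: answer with the number of remaining objects.
    return len(objects)

OPS = {"filter_color": filter_color, "filter_shape": filter_shape, "count": count}

def execute(program, scene):
    # Run each (op, *args) step, threading the object set through the chain.
    state = scene
    for op, *args in program:
        state = OPS[op](state, *args)
    return state

# Hypothetical symbolic scene, as a scene parser might output for CLEVR:
scene = [
    {"shape": "cube", "color": "red", "size": "large"},
    {"shape": "sphere", "color": "red", "size": "small"},
    {"shape": "cylinder", "color": "blue", "size": "large"},
]

# Program a question parser might emit for "How many red objects are there?":
program = [("filter_color", "red"), ("count",)]

print(execute(program, scene))  # 2
```

Because the program runs on an explicit symbolic scene rather than raw pixels, the reasoning step is fully interpretable and separate from perception — the disentanglement the lecture title refers to.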
Syllabus
Intro
Visual Question Answering
Task Breakdown
Architecture Overview
Question Parsing
Program Execution
Training
Quantitative results on CLEVR
CLEVR-Humans & Results
New Scenes: Minecraft
Summary
Taught by
UCF CRCV