Multimodal Representation Learning for Vision and Language

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore multimodal representation learning for vision and language in this 56-minute lecture by Kai-Wei Chang from UCLA. Delve into the challenges of cross-modality decision-making in artificial intelligence tasks, such as answering complex questions about images. Learn about recent advances in representation learning that map data from different modalities into shared embedding spaces, enabling cross-domain knowledge transfer through vector transformations. Discover the speaker's recent efforts in building multimodal representations for vision-language understanding, including training on weakly supervised image-captioning data and on unsupervised image and text corpora. Understand how these models can ground language elements to image regions without explicit supervision. Examine a wide range of vision and language applications and discuss remaining challenges in the field. Gain insights from Dr. Chang, an associate professor at UCLA whose research focuses on robust machine learning methods and on fair, reliable language processing technologies for social-good applications.
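
To make the idea of a shared embedding space concrete, here is a minimal sketch (not the model presented in the lecture): two linear projection heads map precomputed image and text features into one joint vector space, so image-caption alignment can be scored by cosine similarity. The class name, feature dimensions, and projection layers are illustrative assumptions, not details taken from the talk.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    """Toy dual encoder: projects image and text features into one shared space."""
    def __init__(self, image_dim=2048, text_dim=768, embed_dim=256):
        super().__init__()
        # Hypothetical projection heads; real systems use learned encoders end to end.
        self.image_proj = nn.Linear(image_dim, embed_dim)  # e.g. CNN region features
        self.text_proj = nn.Linear(text_dim, embed_dim)    # e.g. transformer token features

    def forward(self, image_feats, text_feats):
        # L2-normalize so the dot product below is a cosine similarity in the shared space
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)
        return img, txt

# Usage: score alignment between a small batch of images and captions
model = DualEncoder()
image_feats = torch.randn(4, 2048)   # placeholder visual features
text_feats = torch.randn(4, 768)     # placeholder caption features
img, txt = model(image_feats, text_feats)
similarity = img @ txt.t()           # 4x4 image-caption similarity matrix
print(similarity.shape)
```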

Syllabus

Multimodal Representation Learning for Vision and Language - Kai-Wei Chang (UCLA)

Taught by

Center for Language & Speech Processing (CLSP), JHU
