Multimodal Representation Learning for Vision and Language
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore multimodal representation learning for vision and language in this 56-minute lecture by Kai-Wei Chang from UCLA. Delve into the challenges of cross-modality decision-making in artificial intelligence tasks, such as answering complex questions about images. Learn about recent advances in representation learning that map data from different modalities into shared embedding spaces, enabling cross-domain knowledge transfer through vector transformations. Discover the speaker's recent efforts in building multimodal representations for vision-language understanding, including training on weakly supervised image-captioning data and unsupervised image and text corpora. Understand how these models can ground language elements to image regions without explicit supervision. Examine a wide range of vision and language applications and discuss remaining challenges in the field. Gain insights from Dr. Chang, an associate professor at UCLA, whose research focuses on robust machine learning methods and fair, reliable language processing technologies for social good applications.
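To make the shared-embedding idea mentioned above concrete, here is a minimal sketch (not the speaker's actual model) of projecting image and text features into a common space and aligning matched pairs with a contrastive objective. All module names, feature dimensions, and the loss formulation are illustrative assumptions for this sketch.

```python
# Illustrative sketch of a shared vision-language embedding space.
# Assumes pre-extracted image and text features; dimensions are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEmbedder(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, joint_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, joint_dim)  # image features -> joint space
        self.txt_proj = nn.Linear(txt_dim, joint_dim)  # text features  -> joint space

    def forward(self, img_feats, txt_feats):
        # L2-normalize so dot products act as cosine similarities
        img = F.normalize(self.img_proj(img_feats), dim=-1)
        txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return img, txt

def contrastive_loss(img, txt, temperature=0.07):
    # Matched image/caption pairs lie on the diagonal of the similarity matrix
    logits = img @ txt.t() / temperature
    targets = torch.arange(img.size(0), device=img.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random tensors standing in for encoder outputs
model = SharedEmbedder()
img_feats = torch.randn(8, 2048)  # e.g., region features from an image encoder
txt_feats = torch.randn(8, 768)   # e.g., caption features from a text encoder
img, txt = model(img_feats, txt_feats)
print(contrastive_loss(img, txt).item())
```

Once trained this way, nearest-neighbor search in the joint space supports cross-modal retrieval and, as the lecture discusses, grounding of language elements to image regions without explicit region-level supervision.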
Syllabus
Multimodal Representation Learning for Vision and Language - Kai-Wei Chang (UCLA)
Taught by
Center for Language & Speech Processing (CLSP), JHU