
Localization vs. Semantics: Visual Representations in Unimodal and Multimodal Models

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore a comparative analysis of visual representations in vision-and-language models versus vision-only models in this 10-minute conference talk from EACL 2024. Delve into research by Zhuowan Li of the Center for Language & Speech Processing at JHU, which probes the learned representations on a wide range of tasks to assess their quality. Discover findings suggesting that vision-and-language models excel at label prediction tasks such as object and attribute prediction, while vision-only models perform better on dense prediction tasks that require more localized information. Gain insight into the role of language in visual learning, along with an empirical guide for choosing among pretrained models, as part of the ongoing discussion about whether joint learning paradigms improve understanding of individual modalities.
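To make the probing methodology concrete, here is a minimal sketch of linear probing under assumed placeholders: a pretrained backbone is frozen and only a linear classifier is trained on its features, so the classifier's accuracy reflects how much task-relevant information the representation encodes. The backbone, feature dimension, and data loader are hypothetical stand-ins, not the talk's actual models or benchmarks.

```python
# Minimal linear-probing sketch (hypothetical setup, not the talk's exact code).
# The backbone stays frozen; only a linear head is trained on its features.
import torch
import torch.nn as nn

def train_linear_probe(backbone: nn.Module, feat_dim: int, num_classes: int,
                       loader, device="cpu", epochs=5, lr=1e-3):
    backbone.to(device).eval()
    for p in backbone.parameters():       # freeze the representation
        p.requires_grad_(False)

    probe = nn.Linear(feat_dim, num_classes).to(device)
    optimizer = torch.optim.Adam(probe.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():
                feats = backbone(images)  # fixed features, no gradients
            loss = criterion(probe(feats), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return probe
```

Comparing probe accuracy across frozen backbones, say a CLIP-style vision-and-language encoder versus a vision-only ViT, yields the kind of representation comparison the talk describes.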

Syllabus

Localization vs. Semantics: Visual Representations in Unimodal and Multimodal Models - EACL 2024

Taught by

Center for Language & Speech Processing (CLSP), JHU
