

M2-RAAP: A Multi-Modal Recipe for Advancing Adaptation-based Pre-training for Video-text Retrieval

Association for Computing Machinery (ACM) via YouTube

Overview

Explore a 14-minute conference presentation from SIGIR 2024 that introduces M2-RAAP, a multi-modal recipe for advancing adaptation-based pre-training in zero-shot video-text retrieval. Learn about the methodology developed by researchers Xingning Dong, Zipeng Feng, Chunluan Zhou, Xuzheng Yu, Ming Yang, and Qingpei Guo, which aims to improve both the effectiveness and efficiency of matching video content with textual descriptions without task-specific training examples. Understand how this work, presented at the ACM SIGIR conference, contributes to multimedia information retrieval and machine learning.

Syllabus

SIGIR 2024 W1.6 [fp] M2-RAAP: A Multi-Modal Recipe for Advancing Adaptation-based Pre-training

Taught by

Association for Computing Machinery (ACM)

