
Understanding Task Vectors in Vision-Language Models - Cross-Modal Representations

Discover AI via YouTube

Overview

Explore research from UC Berkeley examining how vision-language models (VLMs) develop and employ "task vectors": internal representations that enable cross-modal task performance. Dive into the discovery that these latent activations capture the essence of a task in a shared space across text and image modalities, allowing models to apply a task specified in one format to queries posed in another. Learn about the three-phase query processing pipeline in which tokens evolve from raw inputs to task-specific representations and finally to answer-aligned vectors. Understand how combining instruction-based and example-based task vectors yields more efficient representations for handling complex scenarios with limited data. Examine experimental evidence showing that text-based instruction vectors can guide image queries, improving performance over traditional unimodal approaches. Discover the implications of this research for developing more adaptable, context-aware AI systems that use unified task embeddings for cross-modal inference.
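To make the core idea more concrete, below is a minimal, hypothetical sketch in Python/PyTorch of how a task vector might be extracted from demonstrations in one modality and patched into a query from another. The ToyVLM stand-in model, the choice of layer and token position, and the simple averaging rule for merging instruction- and example-based vectors are all illustrative assumptions, not the actual model or procedure from the research discussed in the video.

```python
# Illustrative sketch of "task vector" extraction and cross-modal patching.
# Assumptions: ToyVLM is a stand-in for a real VLM; the patched layer, the
# last-token position, and the averaging rule are hypothetical choices.
import torch
import torch.nn as nn

torch.manual_seed(0)

HIDDEN = 32          # hidden size of the toy model (assumption)
PATCH_LAYER = 1      # intermediate layer whose last-token activation we treat as the task vector

class ToyVLM(nn.Module):
    """Stand-in for a VLM: a small stack of linear blocks over a sequence of embeddings."""
    def __init__(self, hidden=HIDDEN, depth=3):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(depth)])

    def forward(self, x, patch=None):
        # x: (seq_len, hidden). If `patch` is given, overwrite the last-token
        # activation at PATCH_LAYER with the precomputed task vector.
        for i, layer in enumerate(self.layers):
            x = torch.tanh(layer(x))
            if patch is not None and i == PATCH_LAYER:
                x = x.clone()
                x[-1] = patch     # inject the task vector at the final token
        return x

model = ToyVLM()

def last_token_activation(inputs):
    """Run the model and return the PATCH_LAYER activation at the last token."""
    acts = {}
    def hook(_module, _inp, out):
        acts["h"] = torch.tanh(out)[-1].detach()
    handle = model.layers[PATCH_LAYER].register_forward_hook(hook)
    model(inputs)
    handle.remove()
    return acts["h"]

# 1) Extract an example-based task vector from text demonstrations
#    (random tensors stand in for embedded text demos here).
text_demos = [torch.randn(5, HIDDEN) for _ in range(8)]
example_task_vec = torch.stack([last_token_activation(d) for d in text_demos]).mean(0)

# 2) Combine it with an instruction-based vector; a plain average is used
#    purely as an illustrative merging rule.
instruction_vec = last_token_activation(torch.randn(4, HIDDEN))
combined_task_vec = 0.5 * (example_task_vec + instruction_vec)

# 3) Patch the combined vector into an image query's forward pass
#    (random tensor stands in for image-token embeddings).
image_query = torch.randn(6, HIDDEN)
out = model(image_query, patch=combined_task_vec)
print("patched output shape:", out.shape)
```

The design point the video highlights is that a task representation extracted in one modality can condition a query from another; the sketch simulates this by overwriting the final-token activation of an image-like query with a vector derived from text-based demonstrations and instructions.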

Syllabus

Inside the VLM: NEW "Task Vectors" emerge (UC Berkeley)

Taught by

Discover AI

