Understanding Task Vectors in Vision-Language Models - Cross-Modal Representations
Discover AI via YouTube
Overview
Explore groundbreaking research from UC Berkeley examining how vision-language models (VLMs) develop and employ "task vectors": internal representations that enable cross-modal task performance. Dive into the discovery that these latent activations capture a task's essence in a space shared across text and image modalities, allowing models to apply tasks specified in one format to queries posed in another. Learn about the three-phase query processing in which tokens evolve from raw inputs to task-specific representations and finally to answer-aligned vectors. Understand how combining instruction-based and example-based task vectors yields more efficient representations for handling complex scenarios with limited data. Examine experimental evidence that text-based instruction vectors can guide image queries, outperforming traditional unimodal approaches. Discover the implications of this research for building more adaptable, context-aware AI systems that use unified task embeddings for cross-modal inference.
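
To make the core mechanism concrete, the sketch below illustrates the general idea of extracting a "task vector" from a text prompt and patching it into a query's forward pass. It uses a toy transformer stand-in rather than a real VLM, and the choice of layer, the final-token readout site, and the patching step are illustrative assumptions about the technique discussed in the video, not the paper's released code.

```python
# Minimal, self-contained sketch of task-vector extraction and patching.
# ToyLM, TASK_LAYER, and the patch mechanism are hypothetical stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

D_MODEL, N_LAYERS, VOCAB = 64, 6, 100
TASK_LAYER = 3  # assumed mid-layer where the task representation is read/written


class ToyLM(nn.Module):
    """Stand-in for a VLM's language backbone: embedding + a stack of blocks."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, D_MODEL)
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(D_MODEL, nhead=4, batch_first=True)
            for _ in range(N_LAYERS)
        )

    def forward(self, tokens, patch=None):
        # patch: optional (layer_idx, vector) overwriting the last token's
        # hidden state at that layer -- the "task vector" injection.
        h = self.embed(tokens)
        cache = []
        for i, block in enumerate(self.blocks):
            h = block(h)
            if patch is not None and patch[0] == i:
                h = h.clone()
                h[:, -1, :] = patch[1]  # overwrite final-token activation
            cache.append(h)
        return h, cache


model = ToyLM().eval()

# 1) Run a text-only task prompt (instructions or in-context examples) and
#    read off the final-token activation at the chosen layer: the task vector.
text_prompt = torch.randint(0, VOCAB, (1, 12))
with torch.no_grad():
    _, cache = model(text_prompt)
task_vector = cache[TASK_LAYER][:, -1, :]

# 2) Run a query from the other modality (here just different tokens) while
#    patching the task vector into the same layer and position.
image_query = torch.randint(0, VOCAB, (1, 8))
with torch.no_grad():
    patched_out, _ = model(image_query, patch=(TASK_LAYER, task_vector))
    baseline_out, _ = model(image_query)

print("change in final-token output norm:",
      (patched_out[:, -1] - baseline_out[:, -1]).norm().item())
```

In this sketch the patched run behaves as if the query carried the task prompt's intent, which is the cross-modal transfer the video examines; a real experiment would use an actual VLM and compare task accuracy rather than activation norms.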
Syllabus
Inside the VLM: NEW "Task Vectors" emerge (UC Berkeley)
Taught by
Discover AI