Overview
Explore a comprehensive review of BLIP (Bootstrapping Language-Image Pre-training), a framework for unified vision-language understanding and generation. Delve into cross-modal pre-training and see how BLIP addresses two common limitations: noisy web-scraped image-text data and models that transfer well to either understanding or generation tasks, but not both. Learn about the model's architecture, how data flows through it, and how parameters are shared between its modules. Discover the captioning-and-filtering (CapFilt) bootstrapping process, and understand how BLIP achieves state-of-the-art results on a range of vision-language tasks. Gain insights into its zero-shot transfer to video-language tasks and its potential impact on the field.
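
The overview describes BLIP as a single model covering both understanding and generation. As a concrete illustration (not part of the video), here is a minimal sketch assuming the Hugging Face transformers library with BLIP support and the publicly released Salesforce/blip-* checkpoints; it runs image captioning (generation) and image-text matching (understanding) on the same image.

```python
# Unofficial sketch: BLIP for a generation task (captioning) and an
# understanding task (image-text matching) via Hugging Face transformers.
import requests
import torch
from PIL import Image
from transformers import (
    BlipProcessor,
    BlipForConditionalGeneration,
    BlipForImageTextRetrieval,
)

# Any test image works; this COCO validation image is just a convenient example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

# Generation: image captioning with the image-grounded text decoder.
cap_processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
cap_model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")
cap_inputs = cap_processor(images=image, return_tensors="pt")
caption_ids = cap_model.generate(**cap_inputs, max_new_tokens=30)
print("caption:", cap_processor.decode(caption_ids[0], skip_special_tokens=True))

# Understanding: image-text matching with the image-grounded text encoder.
itm_processor = BlipProcessor.from_pretrained("Salesforce/blip-itm-base-coco")
itm_model = BlipForImageTextRetrieval.from_pretrained("Salesforce/blip-itm-base-coco")
itm_inputs = itm_processor(images=image, text="two cats lying on a couch", return_tensors="pt")
with torch.no_grad():
    itm_logits = itm_model(**itm_inputs).itm_score  # logits over (no match, match)
match_prob = torch.softmax(itm_logits, dim=1)[0, 1].item()
print(f"match probability: {match_prob:.3f}")
```

Both heads reuse the same vision backbone and share most text-transformer parameters, which is the parameter-sharing design discussed in the video.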
Syllabus
- Intro
- Sponsor: Zeta Alpha
- Paper Overview
- Vision-Language Pre-Training
- Contributions of the paper
- Model architecture: many parts for many tasks
- How data flows in the model
- Parameter sharing between the modules
- Captioning & Filtering bootstrapping
- Fine-tuning the model for downstream tasks
Taught by
Yannic Kilcher