BLIP: Bootstrapping Language-Image Pre-Training for Unified Vision-Language Understanding and Generation

Yannic Kilcher via YouTube

Overview

Explore a comprehensive review of BLIP (Bootstrapping Language-Image Pre-training), a framework for unified vision-language understanding and generation. Delve into cross-modal pre-training and see how BLIP tackles two problems: noisy web-sourced image-text training data, and architectures that transfer well to either understanding-based or generation-based tasks, but not both. Learn about the model's architecture, how data flows through it, and how parameters are shared between modules. Discover the captioning-and-filtering (CapFilt) bootstrapping process, and understand how BLIP achieves state-of-the-art results on a range of vision-language tasks. Gain insight into its zero-shot transfer to video-language tasks and its potential impact on the field.
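
The CapFilt bootstrapping step lends itself to a short sketch. Below is a minimal, hypothetical Python rendering of the loop the video describes: a captioner proposes synthetic captions for web images, a filter scores image-text matches, and only matching pairs (plus the human-annotated ones) survive into the new training set. The `captioner` and `itm_score` callables here are stand-ins for BLIP's fine-tuned image-grounded decoder and ITM head, not real APIs.

```python
# Minimal sketch of BLIP-style CapFilt bootstrapping (hypothetical stubs,
# not the paper's implementation): filter noisy web captions, add filtered
# synthetic captions, and combine the result with human-annotated pairs.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Pair:
    image_id: str
    caption: str

def capfilt(
    web_pairs: list[Pair],
    human_pairs: list[Pair],
    captioner: Callable[[str], str],     # stand-in: image_id -> synthetic caption
    itm_score: Callable[[Pair], float],  # stand-in: image-text match score in [0, 1]
    threshold: float = 0.5,
) -> list[Pair]:
    kept: list[Pair] = []
    for pair in web_pairs:
        # Filter: keep the noisy web caption only if it matches its image.
        if itm_score(pair) >= threshold:
            kept.append(pair)
        # Caption: propose a synthetic caption, keep it only if it matches too.
        synthetic = Pair(pair.image_id, captioner(pair.image_id))
        if itm_score(synthetic) >= threshold:
            kept.append(synthetic)
    # The cleaned web pairs plus human annotations form the new pre-training set.
    return kept + human_pairs
```

In the paper, the surviving pairs are then used to pre-train a fresh model from scratch, rather than to continue training the model that did the bootstrapping.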

Syllabus

- Intro
- Sponsor: Zeta Alpha
- Paper Overview
- Vision-Language Pre-Training
- Contributions of the paper
- Model architecture: many parts for many tasks
- How data flows in the model
- Parameter sharing between the modules (see the sketch after this syllabus)
- Captioning & Filtering bootstrapping
- Fine-tuning the model for downstream tasks
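
To make the "many parts for many tasks" and parameter-sharing points concrete, here is a toy PyTorch block. It is a simplified guess at the structure, not BLIP's actual code: the encoder and decoder paths keep separate self-attention (bidirectional vs. causal) but share the cross-attention over image features and the feed-forward layers.

```python
# Toy illustration of BLIP-style parameter sharing (simplified, hypothetical):
# encoding and decoding need different self-attention, so those weights are
# separate; cross-attention and the FFN are shared between both paths.
import torch
import torch.nn as nn

class SharedBlock(nn.Module):
    def __init__(self, d: int, heads: int = 4):
        super().__init__()
        # Bidirectional vs. causal self-attention cannot share weights.
        self.self_attn_enc = nn.MultiheadAttention(d, heads, batch_first=True)
        self.self_attn_dec = nn.MultiheadAttention(d, heads, batch_first=True)
        # Cross-attention over image features and the FFN are shared.
        self.cross_attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, 4 * d), nn.GELU(), nn.Linear(4 * d, d))

    def forward(self, text: torch.Tensor, image: torch.Tensor, decode: bool) -> torch.Tensor:
        if decode:
            # Causal mask: each token attends only to earlier tokens.
            n = text.size(1)
            mask = torch.triu(torch.ones(n, n, dtype=torch.bool), diagonal=1)
            x, _ = self.self_attn_dec(text, text, text, attn_mask=mask)
        else:
            x, _ = self.self_attn_enc(text, text, text)
        x, _ = self.cross_attn(x, image, image)  # ground the text in the image
        return x + self.ffn(x)                   # simplified residual, no norms

block = SharedBlock(d=32)
text = torch.randn(1, 5, 32)    # (batch, tokens, dim)
image = torch.randn(1, 7, 32)   # (batch, patches, dim)
enc_out = block(text, image, decode=False)  # image-grounded text encoder path
dec_out = block(text, image, decode=True)   # image-grounded text decoder path
```

In BLIP itself, sharing everything except the self-attention layers is what keeps the multi-task model compact; the video walks through which objectives (ITC, ITM, LM) train which path.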

Taught by

Yannic Kilcher

