BLIP: Bootstrapping Language-Image Pre-Training for Unified Vision-Language Understanding and Generation

Yannic Kilcher via YouTube

Class Central Classrooms (beta): YouTube videos curated by Class Central.

Classroom Contents

  1. Intro
  2. Sponsor: Zeta Alpha
  3. Paper Overview
  4. Vision-Language Pre-Training
  5. Contributions of the paper
  6. Model architecture: many parts for many tasks
  7. How data flows in the model
  8. Parameter sharing between the modules
  9. Captioning & Filtering bootstrapping
  10. Fine-tuning the model for downstream tasks
