Fine-tuning Multi-modal LLaVA Vision and Language Models

Trelis Research via YouTube

Overview

Learn how to fine-tune multi-modal vision and language models like LLaVA in this comprehensive tutorial. Explore the architectures of LLaVA 1.5, LLaVA 1.6, and IDEFICS, see how LLaVA compares to ChatGPT, and survey practical applications. Dive into vision encoder design and overall multi-modal model architecture. Work through data creation, dataset preparation, and fine-tuning. Gain hands-on experience with data loading, LoRA setup, and evaluation methods, then follow practical demonstrations of training, inference, and post-training evaluation. The video closes with technical clarifications and a summary of key takeaways for working with advanced vision and language models.
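
To give a flavor of the LoRA setup and inference steps covered in the video, here is a minimal sketch using the Hugging Face transformers and peft libraries. The checkpoint name, LoRA hyperparameters, image URL, and prompt format below are illustrative assumptions, not the exact values used by Trelis Research.

```python
# Minimal sketch: attach LoRA adapters to a LLaVA 1.5 model and run a test generation.
# Assumptions: the llava-hf/llava-1.5-7b-hf checkpoint and common LoRA defaults.
import torch
from transformers import LlavaForConditionalGeneration, AutoProcessor
from peft import LoraConfig, get_peft_model

model_id = "llava-hf/llava-1.5-7b-hf"  # assumed checkpoint; swap in your own

processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Add low-rank adapters to the language model's attention projections.
# Rank and alpha here are common defaults, not prescriptions from the video.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only a small fraction of weights train

# Quick smoke-test inference (illustrative image and LLaVA 1.5 prompt format).
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
prompt = "USER: <image>\nWhat is shown in this picture? ASSISTANT:"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=50)
print(processor.decode(output[0], skip_special_tokens=True))
```

In practice you would then pass the adapted model to a training loop over your prepared dataset before running the post-training evaluation the tutorial demonstrates.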

Syllabus

Fine-tuning Multi-modal Models
Overview
LLaVA vs ChatGPT
Applications
Multi-modal model architecture
Vision Encoder architecture
LLaVA 1.5 architecture
LLaVA 1.6 architecture
IDEFICS architecture
Data creation
Dataset creation
Fine-tuning
Inference and Evaluation
Data loading
LoRA setup
Recap so far
Training
Evaluation post-training
Technical clarifications
Summary

Taught by

Trelis Research
