LLMOps: Fine-Tuning Video Classifier (ViViT) with Custom Data
The Machine Learning Engineer via YouTube
Overview
Learn how to fine-tune a Video Vision Transformer (ViViT) on your own dataset in this 44-minute tutorial. Explore how to take a pretrained Google model (google/vivit-b-16x2-kinetics400), originally trained on the Kinetics-400 dataset, and adapt it to classify videos from a different dataset. Gain hands-on experience applying LLMOps techniques to machine learning and data science workflows. Access the accompanying code repository on GitHub to follow along and sharpen your skills in video classification with state-of-the-art transformer models.
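The tutorial's exact code lives in the linked GitHub repository; as a rough orientation, the sketch below shows one common way to fine-tune this checkpoint with the Hugging Face transformers Trainer. The dataset class, label names, and training hyperparameters are illustrative assumptions, not the video's own code; a real run would replace the random clips with frames sampled from your videos.

```python
import numpy as np
from torch.utils.data import Dataset
from transformers import (
    VivitImageProcessor,
    VivitForVideoClassification,
    TrainingArguments,
    Trainer,
)

CHECKPOINT = "google/vivit-b-16x2-kinetics400"
LABELS = ["class_a", "class_b"]          # hypothetical labels for a custom dataset

processor = VivitImageProcessor.from_pretrained(CHECKPOINT)
model = VivitForVideoClassification.from_pretrained(
    CHECKPOINT,
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
    label2id={label: i for i, label in enumerate(LABELS)},
    ignore_mismatched_sizes=True,        # swap the 400-class Kinetics head for a new one
)

class ToyVideoDataset(Dataset):
    """Stand-in for a real custom dataset: random 32-frame clips with dummy labels."""
    def __init__(self, num_clips: int = 8):
        self.num_clips = num_clips

    def __len__(self):
        return self.num_clips

    def __getitem__(self, idx):
        # 32 frames of 224x224 RGB, since this checkpoint was pretrained on 32-frame clips
        frames = list(np.random.randint(0, 255, (32, 224, 224, 3), dtype=np.uint8))
        pixel_values = processor(frames, return_tensors="pt")["pixel_values"][0]
        return {"pixel_values": pixel_values, "labels": idx % len(LABELS)}

args = TrainingArguments(
    output_dir="vivit-finetuned",
    per_device_train_batch_size=2,
    learning_rate=5e-5,
    num_train_epochs=1,
)

trainer = Trainer(model=model, args=args, train_dataset=ToyVideoDataset())
trainer.train()
```

Note the `ignore_mismatched_sizes=True` flag: it lets the pretrained backbone load while the Kinetics-400 classification head is re-initialized for the new label set, which is the core of adapting the model to a different dataset.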
Syllabus
LLMOps: Fine Tune Video Classifier (ViViT) with your own data #machinelearning #datascience
Taught by
The Machine Learning Engineer