
Intro to Multi-Modal ML with OpenAI's CLIP

James Briggs via YouTube

Overview

Explore OpenAI's CLIP, a multi-modal model capable of understanding relationships between text and images, in this 23-minute tutorial. Learn how to use CLIP via the Hugging Face library to create text and image embeddings, perform text-image similarity searches, and explore alternative image and text search methods. Gain practical insights into multi-modal machine learning and discover the power of CLIP in bridging the gap between textual and visual data processing.
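
As a rough illustration of the workflow the course describes, the sketch below uses the Hugging Face transformers CLIP classes to embed text and an image and score their similarity. It is not taken from the video itself: the checkpoint name (openai/clip-vit-base-patch32) and the image path are placeholder assumptions for illustration.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "openai/clip-vit-base-patch32"  # assumed checkpoint; the video may use another
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# Text embeddings for a few candidate captions
texts = ["a photo of a dog", "a photo of a city skyline"]
text_inputs = processor(text=texts, return_tensors="pt", padding=True)
with torch.no_grad():
    text_emb = model.get_text_features(**text_inputs)

# Image embedding ("photo.jpg" is a placeholder path)
image = Image.open("photo.jpg")
image_inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    image_emb = model.get_image_features(**image_inputs)

# Text-image similarity: cosine similarity between L2-normalized vectors
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
scores = (image_emb @ text_emb.T).squeeze(0)  # one score per caption
print(scores.tolist(), "->", texts[scores.argmax().item()])
```

For a larger collection, the same image-embedding step would typically be run in batches and the vectors stored, so that a single text embedding can be compared against all of them, which corresponds to the "Embedding a lot of images" and search steps in the syllabus below.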

Syllabus

Intro
What is CLIP?
Getting started
Creating text embeddings
Creating image embeddings
Embedding a lot of images
Text-image similarity search
Alternative image and text search

Taught by

James Briggs

