
Fine-tuning Methods for Vector Search in Semantic Search and QA Applications

OpenSource Connections via YouTube

Overview

Explore fine-tuning techniques for vector search in this 36-minute conference talk from Haystack EU 2022. Delve into the challenges of building effective embedding models for domain-specific applications. Learn about popular fine-tuning methods for semantic search and QA, including MSE-loss, MNR-loss, multilingual knowledge distillation, TSDAE, AugSBERT, GenQ, and GPL. Understand when and how to apply these techniques based on available data and use cases. Gain insights from James Briggs, a Staff Developer Advocate at Pinecone and freelance ML Engineer, as he shares his expertise in NLP and vector search. Discover strategies for handling low-resource scenarios, unstructured text, and data augmentation techniques to improve your embedding models.
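To give a flavor of one of the methods covered, the sketch below shows what MNR-loss fine-tuning of a bi-encoder typically looks like with the sentence-transformers library. This is an illustrative example rather than code from the talk; the model name and the tiny set of training pairs are placeholder assumptions.

```python
# Minimal sketch (not from the talk): fine-tuning a bi-encoder with
# Multiple Negatives Ranking (MNR) loss via sentence-transformers.
# The model name and training pairs below are placeholders.
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Each InputExample holds a positive (query, passage) pair; MNR loss treats
# the other passages in the batch as in-batch negatives.
train_examples = [
    InputExample(texts=["what is vector search?",
                        "Vector search retrieves items by comparing dense embeddings."]),
    InputExample(texts=["how do I fine-tune an embedding model?",
                        "Embedding models can be fine-tuned on domain-specific query-passage pairs."]),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)

# Larger batches generally help MNR loss, since each step then sees
# more in-batch negatives per query.
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10,
)
```

With real data, the placeholder pairs would be replaced by domain-specific (query, relevant passage) pairs, which is the data regime the talk's other methods (GenQ, GPL, AugSBERT) aim to reach when such pairs are scarce.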

Syllabus

Intro
Welcome
Vector Search
Why fine-tune
What is fine-tuning
Multiple negatives ranking
Hard negative mining
How many pairs
Low-resource scenarios
Unstructured text
Synthetic data augmentation
Asymmetric data augmentation

Taught by

OpenSource Connections
