
Evaluation Measures for Search and Recommender Systems

James Briggs via YouTube

Overview

Explore popular offline metrics for evaluating search and recommender systems in this 31-minute video. Learn about Recall@K, Mean Reciprocal Rank (MRR), Mean Average Precision@K (MAP@K), and Normalized Discounted Cumulative Gain (NDCG@K), with a Python demonstration of each metric. Understand why evaluation measures matter in information retrieval systems, how they underpin the search and recommendation products of large technology companies, and how to use them to make informed design decisions. Gain insights into dataset preparation, retrieval basics, and the strengths and weaknesses of each metric. Additional resources include a related Pinecone article, code notebooks, and a discounted NLP course.
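
To make these metrics concrete, below is a minimal Python sketch (not taken from the video or its notebooks) that computes Recall@K, MRR, MAP@K, and a binary-relevance NDCG@K for a single query; the ranked list, relevance judgments, and variable names are illustrative assumptions.

import math

# Toy example: ranked results returned by a retriever and the set of
# items judged relevant for the query (illustrative data only).
ranked = ["d3", "d1", "d7", "d2", "d9"]
relevant = {"d1", "d2", "d5"}
k = 5

top_k = ranked[:k]

# Recall@K: fraction of all relevant items that appear in the top K.
recall_at_k = sum(doc in relevant for doc in top_k) / len(relevant)

# MRR: reciprocal rank of the first relevant result (0 if none found).
mrr = 0.0
for rank, doc in enumerate(top_k, start=1):
    if doc in relevant:
        mrr = 1.0 / rank
        break

# MAP@K (single query): average of precision@i taken at each relevant hit.
hits, precision_sum = 0, 0.0
for i, doc in enumerate(top_k, start=1):
    if doc in relevant:
        hits += 1
        precision_sum += hits / i
average_precision = precision_sum / min(len(relevant), k)

# NDCG@K with binary relevance: DCG of this ranking divided by the DCG
# of an ideal ranking that places all relevant items first.
dcg = sum((doc in relevant) / math.log2(i + 1)
          for i, doc in enumerate(top_k, start=1))
ideal_hits = min(len(relevant), k)
idcg = sum(1.0 / math.log2(i + 1) for i in range(1, ideal_hits + 1))
ndcg = dcg / idcg if idcg else 0.0

print(recall_at_k, mrr, average_precision, ndcg)

For this toy query the script prints roughly 0.67, 0.5, 0.33, and 0.50, showing how the four measures reward the same ranking differently.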

Syllabus

Intro
Offline Metrics
Dataset and Retrieval 101
Recall@K
Recall@K in Python
Disadvantages of Recall@K
MRR
MRR in Python
MAP@K
MAP@K in Python
NDCG@K
Pros and Cons of NDCG@K
Final Thoughts

Taught by

James Briggs

