Overview
Explore data labeling techniques for search relevance evaluation in this 54-minute conference talk by Evgeniya Sukhodolskaya, data evangelist and senior ML manager at Toloka. Dive into the ranking problem, commonly used ranking quality metrics, and human-in-the-loop approaches for obtaining relevance judgments at scale. Learn best practices and potential pitfalls in building evaluation pipelines for information retrieval. Gain insights into sampling, crowdsourcing, aggregation with the Crowdkit library, and the N-squared method. Participate in a Q&A session covering accuracy vs. dataset size, expert manual labeling, and sampling algorithms. Access resources and discover upcoming events in the field of search relevance evaluation.
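To give a flavor of how two of the topics above, aggregation with Crowdkit and ranking quality metrics, might fit together in an evaluation pipeline, here is a minimal Python sketch (not code from the talk): it resolves redundant crowd judgments with Crowd-Kit's Dawid-Skene aggregator, then scores a ranking with NDCG, one commonly used ranking quality metric. The toy judgments and the NDCG helper are illustrative assumptions; the task/worker/label columns are the input schema Crowd-Kit's aggregators expect.

```python
# Minimal sketch (illustrative, not from the talk): aggregate redundant
# crowd relevance judgments with Crowd-Kit, then score a ranking with NDCG.
import math

import pandas as pd
from crowdkit.aggregation import DawidSkene  # pip install crowd-kit

# Toy data (assumption): three workers judge two query-document pairs
# on a 0-2 graded relevance scale.
judgments = pd.DataFrame(
    {
        "task":   ["q1_d1", "q1_d1", "q1_d1", "q1_d2", "q1_d2", "q1_d2"],
        "worker": ["w1", "w2", "w3", "w1", "w2", "w3"],
        "label":  [2, 2, 1, 0, 1, 0],
    }
)

# Resolve disagreements into one label per task via the Dawid-Skene EM model.
labels = DawidSkene(n_iter=100).fit_predict(judgments)
print(labels)  # e.g. q1_d1 -> 2, q1_d2 -> 0


def dcg(relevances):
    """Discounted cumulative gain over a ranked list of graded labels."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))


def ndcg(relevances):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal else 0.0


# Score the system's ranking (q1_d1 above q1_d2) with the aggregated labels.
ranked = [labels["q1_d1"], labels["q1_d2"]]
print(f"NDCG = {ndcg(ranked):.3f}")
```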
Syllabus
Introduction
Problem
Pipeline
Metrics
Sampling
Crowdsourcing
Aggregation
Crowdkit
N squared
Questions
What about crowdsourcing
Accuracy vs. dataset size
Expert manual labeling
Conclusion
Resources
Q&A
Q&A Questions
Reshare
Leakage
Andreas
Sampling after sampling
Sampling algorithm
Pairwise judgments
Chat window
Upcoming events
Taught by
OpenSource Connections