Overview
Explore the critical role of retrieval evaluation in Large Language Model (LLM)-based applications such as Retrieval-Augmented Generation (RAG) in this 13-minute conference talk by Atita Arora, presented at the MLOps.community AI in Production event. Delve into the correlation between retrieval accuracy and answer quality, and understand the importance of thorough evaluation methodologies. Drawing on her 15 years of experience as a Solution Architect and Search Relevance strategist, Arora shares insights on decoding complex business challenges and pioneering innovative information retrieval solutions. Gain practical knowledge about evaluating RAG systems while navigating the world of vectors and LLMs, and discover how these insights can improve effectiveness in real-world problem-solving.
Syllabus
Navigating via Retrieval Evaluation to Demystify LLM Wonderland // Atita Arora // AI in Production
Taught by
MLOps.community