Learn how to effectively evaluate Large Language Model (LLM) systems in this conference presentation from SnorkelCon 2024, where Snorkel AI experts Rebekah Westerlind and Venkatesh Rao deliver a 37-minute deep dive into evaluation methodologies. Master the structured workflow for assessing LLM performance, starting with the fundamental challenges of evaluation and moving through the creation of specialized benchmarks tailored to specific business needs. Explore practical techniques for defining evaluation criteria, selecting appropriate evaluators, and crafting reference prompts.

Discover methods for implementing both heuristic-based and predictive evaluation models, while learning to leverage LLMs themselves for nuanced assessment tasks. Through a live demonstration, see how to apply data slicing techniques to gain granular insights into performance metrics and identify areas for improvement. Gain actionable knowledge for implementing comprehensive LLM evaluation strategies that align with enterprise objectives and drive system optimization.
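The description mentions heuristic evaluators, LLM-as-judge scoring, and data slicing. The sketch below is not taken from the presentation or from Snorkel's tooling; it is a minimal, hypothetical Python illustration of how those pieces can fit together, with `judge_with_llm` standing in for a real model call and pandas used for per-slice aggregation.

```python
# Hypothetical sketch: combine a heuristic evaluator and an LLM-as-judge
# stand-in, then slice the results to see where a system underperforms.
from dataclasses import dataclass
from typing import Callable

import pandas as pd


@dataclass
class EvalExample:
    prompt: str
    response: str
    slice_name: str  # a business-defined category, e.g. "billing" or "account"


def judge_with_llm(prompt: str, response: str) -> float:
    """Placeholder for an LLM-as-judge call.

    In practice this would send a rubric-style prompt to a model and parse a
    numeric score from its reply; here it returns a dummy score so the sketch runs.
    """
    return 1.0 if response.strip() else 0.0


def heuristic_length_check(prompt: str, response: str) -> float:
    """A simple heuristic evaluator: penalize empty or very short answers."""
    return 1.0 if len(response.split()) >= 5 else 0.0


def evaluate(examples: list[EvalExample],
             evaluators: dict[str, Callable[[str, str], float]]) -> pd.DataFrame:
    """Score every example with every evaluator and return a tidy results table."""
    rows = []
    for ex in examples:
        for name, fn in evaluators.items():
            rows.append({"slice": ex.slice_name,
                         "evaluator": name,
                         "score": fn(ex.prompt, ex.response)})
    return pd.DataFrame(rows)


if __name__ == "__main__":
    examples = [
        EvalExample("How do I reset my password?",
                    "Click 'Forgot password' on the login page.", "account"),
        EvalExample("Why was I charged twice?", "", "billing"),
    ]
    results = evaluate(examples, {"llm_judge": judge_with_llm,
                                  "length_heuristic": heuristic_length_check})
    # Data slicing: aggregate scores per slice to spot weak areas.
    print(results.groupby(["slice", "evaluator"])["score"].mean())
```

Per-slice averages like these are what make it possible to see that, for example, a system scoring well overall still fails on one business-critical category.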