Overview
Explore the intricacies of evaluating Large Language Model (LLM) based applications in this comprehensive webinar featuring Shir Chorev, CTO at Deepchecks, and Yaron, VP Product at Deepchecks. The session covers crucial topics such as LLM hallucinations, evaluation methodologies, and the importance of golden sets in benchmarking LLM performance, and includes a live demonstration of the new Deepchecks LLM evaluation module, designed to address the challenges of assessing LLM-based applications. Gain insight into robust approaches for tackling hallucinations, in which models generate outputs not grounded in the given context. Learn about automated and manual evaluation techniques, and understand how to structure effective golden sets for benchmarking. This 48-minute session, hosted by LLMOps Space, a global community for LLM practitioners, offers valuable knowledge for professionals deploying LLMs in production environments.
Syllabus
Evaluating LLM-Based Apps: New Product Release | Deepchecks LLM Validation
Taught by
LLMOps Space