YouTube

Evaluating LLM-Based Apps - Deepchecks LLM Validation

LLMOps Space via YouTube

Overview

Explore the intricacies of evaluating Large Language Model (LLM) based applications in this comprehensive webinar featuring Shir Chorev, CTO at Deepchecks, and Yaron, VP Product at Deepchecks. Delve into crucial topics such as LLM hallucinations, evaluation methodologies, and the importance of golden sets in benchmarking LLM performance. Witness a live demonstration of the new Deepchecks LLM evaluation module, designed to address the challenges of assessing LLM-based applications. Gain insights into robust approaches for tackling hallucinations, where models generate outputs not grounded in the given context. Learn about various automated and manual evaluation techniques, and understand the significance of structuring effective golden sets for benchmarking. This 48-minute session, hosted by LLMOps Space, a global community for LLM practitioners, offers valuable knowledge for professionals working on deploying LLMs in production environments.
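The golden-set benchmarking the webinar describes can be pictured with a minimal sketch: run the application's outputs against a small set of reference answers and report the fraction that match. This is an illustrative toy, not the Deepchecks API — the function names, the string-similarity metric, and the 0.9 threshold are all assumptions for the example; real evaluations typically use semantic metrics such as embedding similarity.

```python
# Illustrative sketch (not the Deepchecks API): scoring an LLM app's
# outputs against a small "golden set" of reference answers.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity in [0, 1]; a real evaluation would
    substitute a semantic metric (e.g. embedding distance)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def score_against_golden_set(outputs, golden, threshold=0.9):
    """Fraction of outputs that match their golden reference."""
    hits = sum(similarity(o, g) >= threshold for o, g in zip(outputs, golden))
    return hits / len(golden)

golden = [
    "Paris is the capital of France.",
    "Water boils at 100 degrees Celsius at sea level.",
]
outputs = [
    "Paris is the capital of France.",
    "Water boils at 90 degrees Celsius.",  # hallucinated fact, scored as a miss
]
print(score_against_golden_set(outputs, golden))  # → 0.5
```

Hallucination checks in the session are framed the same way: an output not grounded in the reference context fails the comparison, which is why a well-structured golden set is central to benchmarking.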

Syllabus

Evaluating LLM-Based Apps: New Product Release | Deepchecks LLM Validation

Taught by

LLMOps Space
