YouTube

How to Systematically Test and Evaluate LLM Apps - MLOps Podcast

MLOps.community via YouTube

Overview

Explore a comprehensive podcast episode featuring Gideon Mendels, CEO of Comet, discussing systematic testing and evaluation of LLM applications. Gain insights into hybrid approaches combining ML and software engineering best practices, defining evaluation metrics, and tracking experimentation for LLM app development. Learn about comprehensive unit testing strategies for confident deployment, and discover the importance of managing machine learning workflows from experimentation to production. Delve into topics such as LLM evaluation methodologies, AI metrics integration, experiment tracking, collaborative approaches, and anomaly detection in model outputs. Benefit from Mendels' expertise in NLP, speech recognition, and ML research as he shares valuable insights for developers working with LLM applications.
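Two themes from the episode, scoring model outputs with an LLM as a judge and folding that score into unit tests for confident deployment, lend themselves to a short illustration. The sketch below is a generic example, not the approach demonstrated in the podcast or any Comet tooling; `call_llm`, `JUDGE_PROMPT`, and `judge_answer` are hypothetical names standing in for whatever model client and grading rubric an application actually uses.

```python
# Minimal, generic sketch of "LLM as a judge" evaluation wired into a unit test.
# call_llm is a hypothetical stand-in for a real model client.

def call_llm(prompt: str) -> str:
    """Hypothetical model client; replace with a call to your LLM provider."""
    raise NotImplementedError("wire this to your LLM provider")


JUDGE_PROMPT = (
    "You are grading an answer for factual accuracy.\n"
    "Question: {question}\n"
    "Answer: {answer}\n"
    "Reply with a single integer from 1 (wrong) to 5 (fully correct)."
)


def judge_answer(question: str, answer: str) -> int:
    """Ask a judge model to score an answer; returns an integer from 1 to 5."""
    reply = call_llm(JUDGE_PROMPT.format(question=question, answer=answer))
    return int(reply.strip())


def test_capital_question():
    """Pytest-style check: fail the suite if the judge scores the answer low."""
    question = "What is the capital of France?"
    answer = call_llm(question)
    assert judge_answer(question, answer) >= 4
```

Run under pytest, a low judge score fails the suite, which is the basic mechanic behind treating LLM evaluation like conventional software testing.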

Syllabus

Gideon's preferred coffee
Takeaways
A huge shout-out to Comet ML for sponsoring this episode!
Please like, share, leave a review, and subscribe to our MLOps channels!
Evaluation metrics in AI
LLM Evaluation in Practice
LLM testing methodologies
LLM as a judge
Opik track function overview
Tracking user response value
Exploring AI metrics integration
Experiment tracking and LLMs
Micro Macro collaboration in AI
RAG Pipeline Reproducibility Snapshot
Collaborative experiment tracking
Feature flags in CI/CD
Labeling challenges and solutions
LLM output quality alerts
Anomaly detection in model outputs
Wrap up

Taught by

MLOps.community
