How to Systematically Test and Evaluate LLM Apps - MLOps Podcast

MLOps.community via YouTube

Classroom Contents

  1. Gideon's preferred coffee
  2. Takeaways
  3. A huge shout-out to Comet ML for sponsoring this episode!
  4. Please like, share, leave a review, and subscribe to our MLOps channels!
  5. Evaluation metrics in AI
  6. LLM Evaluation in Practice
  7. LLM testing methodologies
  8. LLM as a judge
  9. Opik track function overview
  10. Tracking user response value
  11. Exploring AI metrics integration
  12. Experiment tracking and LLMs
  13. Micro-Macro collaboration in AI
  14. RAG Pipeline Reproducibility Snapshot
  15. Collaborative experiment tracking
  16. Feature flags in CI/CD
  17. Labeling challenges and solutions
  18. LLM output quality alerts
  19. Anomaly detection in model outputs
  20. Wrap up
