
Examining the Principles of Observability and Its Relevance in LLM Applications

Linux Foundation via YouTube

Overview

Explore the principles of observability and their relevance to Large Language Model (LLM) applications in this 19-minute conference talk by Guangya Liu and Jean Detoeuf of IBM. Gain insight into why monitoring AI behavior matters as LLMs become increasingly prevalent, why users demand transparency in AI decision-making, and how observability addresses these concerns. Learn about key metrics to observe in LLM applications, including model latency, cost, and tracking. Examine emerging tools such as Traceloop, OpenTelemetry, and Langfuse, and understand how to leverage them for analytics, monitoring, and optimization of LLM applications. Delve into methods for refining LLM performance, uncovering biases, troubleshooting problems, and ensuring AI reliability and trustworthiness through effective observability practices.
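To make the metrics mentioned above concrete, here is a minimal, library-agnostic sketch of recording latency, token usage, and cost for a single LLM call. This is an illustration only, not code from the talk: the function names, the `fake_llm_call` stub, and the per-token price are all hypothetical, and real deployments would typically emit these values through a tool like OpenTelemetry, Traceloop, or Langfuse rather than a hand-rolled record.

```python
import time
from dataclasses import dataclass

# Hypothetical flat price, for illustration only; real pricing varies by model.
PRICE_PER_1K_TOKENS_USD = 0.002

@dataclass
class LLMCallRecord:
    """Basic observability data for one model invocation."""
    model: str
    latency_s: float
    prompt_tokens: int
    completion_tokens: int

    @property
    def cost_usd(self) -> float:
        total = self.prompt_tokens + self.completion_tokens
        return total / 1000 * PRICE_PER_1K_TOKENS_USD

def observe_call(model, call_fn, prompt):
    """Time call_fn and capture the latency/cost/usage metrics
    that an LLM observability tool would track."""
    start = time.perf_counter()
    completion, prompt_tokens, completion_tokens = call_fn(prompt)
    latency = time.perf_counter() - start
    record = LLMCallRecord(model, latency, prompt_tokens, completion_tokens)
    return completion, record

def fake_llm_call(prompt):
    """Stand-in for a real model client; returns (text, prompt_tokens, completion_tokens)."""
    return "hello", len(prompt.split()), 1

text, record = observe_call("example-model", fake_llm_call, "say hello please")
print(record.latency_s, record.cost_usd)
```

In a production setup these fields would become span attributes or logged events, so dashboards can aggregate them across requests.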

Syllabus

Examining the Principles of Observability and Its Relevance in LLM... - Guangya Liu & Jean Detoeuf

Taught by

Linux Foundation

