Overview
Explore the principles of observability and how they apply to Large Language Model (LLM) applications in this 19-minute conference talk by Guangya Liu and Jean Detoeuf from IBM. Gain insight into why monitoring AI behavior matters as LLMs become increasingly prevalent, why users demand transparency in AI decision-making, and how observability addresses these concerns. Learn about the key signals to observe in LLM applications, such as model latency and cost, along with request tracking. Examine emerging technologies such as Traceloop, OpenTelemetry, and Langfuse, and understand how to leverage these tools for analytics, monitoring, and optimization of LLM applications. Delve into methods for refining LLM performance, uncovering biases, troubleshooting problems, and ensuring AI reliability and trustworthiness through effective observability practices.
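To make the signals mentioned above concrete, here is a minimal sketch (not taken from the talk) of how plain OpenTelemetry spans could record latency, token usage, and estimated cost for an LLM call. The call_llm helper, the attribute names, and the per-token pricing are hypothetical placeholders; a real setup would export spans to a collector or a backend such as Langfuse or Traceloop rather than the console.

```python
import time

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Export spans to the console for demonstration purposes.
trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    SimpleSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("llm-observability-demo")

PRICE_PER_1K_TOKENS = 0.002  # hypothetical pricing, for illustration only


def call_llm(prompt: str) -> tuple[str, int]:
    """Stand-in for a real model call; returns a response and a token count."""
    time.sleep(0.1)
    return f"echo: {prompt}", len(prompt.split())


def traced_completion(prompt: str) -> str:
    # Wrap the model call in a span and attach latency, token, and cost attributes.
    with tracer.start_as_current_span("llm.completion") as span:
        start = time.time()
        response, tokens = call_llm(prompt)
        span.set_attribute("llm.latency_ms", (time.time() - start) * 1000)
        span.set_attribute("llm.total_tokens", tokens)
        span.set_attribute("llm.estimated_cost_usd", tokens / 1000 * PRICE_PER_1K_TOKENS)
        return response


if __name__ == "__main__":
    print(traced_completion("Why does observability matter for LLM apps?"))
```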
Syllabus
Examining the Principles of Observability and Its Relevance in LLM... - Guangya Liu & Jean Detoeuf
Taught by
Linux Foundation