Overview

Learn how to enhance observability for Large Language Models (LLMs) in the Apache ecosystem through this 23-minute technical presentation by Observability R&D Engineer Li Yanhong. The talk explores implementation strategies that combine OpenTelemetry (OTel) with automated Python probe injection to monitor and analyze LLM performance in Apache environments, and shares observability practices that support better tracing, debugging, and optimization of LLM applications using open-source monitoring tools.
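The presentation itself is the authoritative reference, but as a rough idea of what OTel-based instrumentation of an LLM call can look like in Python, here is a minimal sketch. It is not the presenter's code: the call_llm() helper and the llm.* attribute keys are illustrative assumptions, and only standard OpenTelemetry SDK APIs are used.

    # Minimal illustrative sketch (not the presenter's code): wrap an LLM call
    # in an OpenTelemetry span so latency and basic request metadata become
    # observable. The call_llm() helper and the llm.* attribute keys are
    # assumptions made for this example.
    from opentelemetry import trace
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

    # Export spans to the console so the example is self-contained.
    provider = TracerProvider()
    provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
    trace.set_tracer_provider(provider)
    tracer = trace.get_tracer("llm.observability.demo")

    def call_llm(prompt: str) -> str:
        # Stand-in for a real model invocation (e.g., a request to an inference server).
        return "stub completion for: " + prompt

    def traced_completion(prompt: str, model: str = "example-model") -> str:
        # One span per LLM request, annotated with model metadata.
        with tracer.start_as_current_span("llm.completion") as span:
            span.set_attribute("llm.model", model)
            span.set_attribute("llm.prompt.length", len(prompt))
            completion = call_llm(prompt)
            span.set_attribute("llm.completion.length", len(completion))
            return completion

    if __name__ == "__main__":
        traced_completion("Why does an LLM application need tracing?")

For the "automated probe injection" side, the OpenTelemetry Python distribution also provides the opentelemetry-instrument launcher (for example, opentelemetry-instrument python app.py), which attaches instrumentation without code changes; how the talk applies or extends that mechanism is covered in the presentation itself.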
Syllabus
Enhancing the Observability of LLM within the Apache: Based on OTel and Auto Python Probe Injection
Taught by
The ASF