Overview
Explore observability techniques for Large Language Models (LLMs) in this lightning talk from the LLMs in Production Conference. Learn why traditional debugging and unit testing fall short for LLMs and how observability practices can improve their reliability. Gain insights into instrumenting features for rich telemetry, analyzing behavior from collected data, and leveraging observability as a key source for evaluations. Topics include natural language processing, distributed tracing, monitoring end-user experiences, and using OpenTelemetry. Presented by Phillip Carter, an OpenTelemetry maintainer who leads AI initiatives at Honeycomb, this talk offers valuable knowledge for anyone working to make LLMs more dependable in production environments.
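As a flavor of what "instrumenting features for rich telemetry" can look like, here is a minimal sketch (illustrative, not code from the talk) that wraps an LLM call in an OpenTelemetry span and records the prompt and response as attributes; the `llm.*` attribute names and the `call_llm` helper are assumptions for the example:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Wire up a tracer that prints spans to stdout; a real deployment would
# export via OTLP to a backend such as Honeycomb.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("llm-demo")

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return "stubbed model output"

def answer(prompt: str) -> str:
    # Record the feature's inputs and outputs on a span so behavior can be
    # analyzed later from the collected telemetry.
    with tracer.start_as_current_span("llm.request") as span:
        span.set_attribute("llm.prompt", prompt)
        response = call_llm(prompt)
        span.set_attribute("llm.response", response)
        return response

if __name__ == "__main__":
    answer("What is observability?")
```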
Syllabus
Intro
Welcome
Observability
Natural Language
Results
Example
Distributed tracing
Monitoring end user experience
OpenTelemetry
Taught by
MLOps.community