Overview
Learn how to implement scalable and privacy-focused LLM monitoring in production environments through this 17-minute conference talk from Google I/O Extended AI Seattle. Explore WhyLabs developer Sage Elliott's insights into LLM Observability using LangKit, an open-source toolkit that detects toxic language, sentiment, jailbreak attempts, sensitive data leakage, and hallucinations across LLM platforms including LangChain, HuggingFace, MosaicML, OpenAI, and Falcon. Gain practical knowledge of ML monitoring and AI Observability fundamentals, understand common LLM pain points and their solutions, and discover how to leverage LangKit's extracted and custom language metrics for production-scale monitoring. Follow along with real-world examples and implementation strategies that demonstrate how easily LangKit integrates into existing LLM applications.
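As a point of reference for the integration story covered in the talk, below is a minimal sketch of the LangKit quickstart pattern: initializing the LLM metrics schema and profiling a prompt/response pair with whylogs. The prompt and response strings are placeholders, and exact module names may vary slightly between LangKit versions.

```python
# Minimal LangKit usage sketch (assumes `pip install langkit[all]`).
import whylogs as why
from langkit import llm_metrics  # registers LLM metrics such as toxicity and sentiment

# Build a whylogs schema that extracts LangKit's language metrics.
schema = llm_metrics.init()

# Profile a single prompt/response pair; the resulting profile stores
# the extracted metrics rather than the raw text itself, which keeps
# monitoring privacy-focused.
results = why.log(
    {"prompt": "What is the capital of France?",
     "response": "The capital of France is Paris."},
    schema=schema,
)

# Inspect the extracted metric columns.
print(results.view().to_pandas())
```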
Syllabus
Introduction
What is ML monitoring & AI Observability?
Large Language Models (LLMs)
Common Pain Points with LLMs
How to solve those problems
Solving at Scale: LangKit (Open Source)
Extracted Language Metrics
LangKit is easy to use!
Custom Language Metrics (see the sketch after this syllabus)
Options for monitoring data and ML models at production scale over time
Resources + Quick Demo (if time)
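For the "Custom Language Metrics" item above, one lightweight way to add your own metric is to compute it yourself and log it as an extra column next to LangKit's extracted metrics; LangKit also supports registering custom metrics as UDFs, which its documentation covers. The refusal-phrase counter, column name, and phrase list below are illustrative assumptions, not part of LangKit itself.

```python
import whylogs as why
from langkit import llm_metrics

# Hypothetical custom metric: count refusal-style phrases in a response.
REFUSAL_PHRASES = ("i can't", "i cannot", "as an ai", "i'm unable")

def refusal_count(response: str) -> int:
    text = response.lower()
    return sum(text.count(phrase) for phrase in REFUSAL_PHRASES)

schema = llm_metrics.init()

prompt = "Summarize this document."
response = "I'm unable to help with that request."

# Log LangKit's extracted metrics plus the custom column in one record;
# whylogs profiles the extra column with its standard metrics.
results = why.log(
    {
        "prompt": prompt,
        "response": response,
        "response.refusal_count": refusal_count(response),
    },
    schema=schema,
)
print(results.view().to_pandas())
```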
Taught by
AICamp