Overview
Explore techniques for detecting and identifying vulnerabilities in LLM applications in this 40-minute breakout session. Learn about the challenges of putting LLM applications into production, including hallucinations, discriminatory behavior, and prompt injection attacks. Discover the concepts of LLM app vulnerabilities and the red-teaming process. Dive deep into automated detection techniques and benchmarking methods for GenAI systems. Gain a better understanding of automated safety and security assessments tailored to LLM applications. Led by Corey Abshire, Senior AI Specialist Solutions Architect at Databricks, this talk aims to transform the journey of LLM deployment into a secure and confident stride towards innovation. Access additional resources like the LLM Compact Guide and Big Book of MLOps for further exploration.
Syllabus
Red Teaming of LLM Applications: Going from Prototype to Production
Taught by
Databricks