Overview
Navigate the iterative development of Large Language Model (LLM) applications and explore the intricacies of LLMOps design in this 39-minute AI in Production talk by Yinxi Zhang from Databricks. Discover strategies for anchoring LLM development in practical business use cases and for leveraging your own data effectively. Learn how to incorporate Continuous Integration and Continuous Deployment (CI/CD) as a core component of LLM pipeline deployment, much as in Machine Learning Operations (MLOps). Address the unique challenges posed by LLMs, including data security, API governance, GPU infrastructure requirements for inference, integration with external vector databases, and the lack of clear evaluation rubrics. Gain insights into overcoming these obstacles and making strategic adaptations. Explore reference architectures for productionizing Retrieval-Augmented Generation (RAG) applications on the Databricks Lakehouse platform. Join Yinxi Zhang, a Staff Data Scientist at Databricks with extensive experience building end-to-end AI solutions across various industries, as she shares her expertise in charting the LLMOps odyssey.
Syllabus
Charting LLMOps Odyssey // Yinxi Zhang @ Databricks // AI in Production Talk
Taught by
MLOps.community