Deploying GenAI Applications to Enterprises: Custom Evaluation Models and LLMOps Workflow - MLOps World
MLOps World: Machine Learning in Production via YouTube
Overview
Explore the challenges and solutions in deploying generative AI applications to enterprises through this 49-minute conference talk from MLOps World: Machine Learning in Production. Gain insights from Alexander Kvamme, CEO of Echo AI, and Arjun Bansal, CEO & Co-founder of Log10, as they share Echo AI's journey in deploying a conversational intelligence platform to billion-dollar retail brands. Discover how they overcame LLM accuracy issues through iterative prompt engineering and collaborative workflows. Learn about the importance of end-to-end LLMOps workflows in resolving accuracy problems and scaling enterprise customers to production. Understand the role of AI-powered assistance, such as Log10's Prompt Engineering Copilot, in systematically improving accuracy and handling increased customer demand. Delve into the infrastructure requirements for successfully deploying conversational intelligence platforms at enterprise scale, including logging, tagging, debugging, prompt optimization, feedback, fine-tuning, and seamless integration with existing AI tech stacks and developer tooling.
Syllabus
What It Actually Takes to Deploy GenAI Applications to Enterprises: Custom Evaluation Models
Taught by
MLOps World: Machine Learning in Production