Overview
Join a roundtable discussion featuring four machine learning experts from Tecton as they delve into the challenges and best practices of getting ML applications into production. With over 35 years of combined experience in MLOps at companies like AWS, Google, Lyft, and Uber, the panelists share insights on overcoming bottlenecks caused by organizational structure, use cases, and tech stacks. Learn about essential ML infrastructure, integrated batch and stream processing, scaling AI from zero, and identifying red flags in technology stacks. Discover strategies for feature quality monitoring, building recommender system tools, and quantifying the business value of machine learning. Gain valuable knowledge from industry veterans to efficiently deploy your ML applications in production environments.
Syllabus
Introduction to Kevin Stumpf, Derek Salama, Eddie Esquivel, and Isaac Cameron
Challenges of Getting Classical ML into Production
Infrastructure Costs
Bridging Business and Tech
ML Infrastructure Essentials
Integrated Batch and Stream Processing
Scaling AI from Zero
Tech Stack Red Flags
Tecton: Feature Quality Monitoring
Building Recommender System Tools
Quantifying the Business Value of ML
Wrap-up
Taught by
MLOps.community