Overview
Explore the challenges of providing Large Language Models (LLMs) as a Service in this 12-minute lightning talk from the LLMs in Production Conference. Delve into key issues including scalability, model optimization, cost-effectiveness, and data privacy. Learn from Hemant Jain, a machine learning inference expert at Cohere AI who previously helped develop NVIDIA's Triton Inference Server. Gain insights into the complexities of model footprint and fine-tuning, and the importance of balancing performance with resource management in the evolving landscape of LLM deployment and service delivery.
Syllabus
Introduction
Model Footprint
Fine-Tuning
Cost
Model Optimization
Data Privacy
Taught by
MLOps.community