Overview
Explore the advantages of deploying task-specific AI models in this 39-minute conference talk by Devvret Rishi of Predibase. Delve into the growing trend of organizations choosing specialized, fine-tuned LLMs over large, general-purpose models like ChatGPT, and learn how smaller, task-specific models reduce both cost and latency. Discover Ludwig, the declarative ML framework used at Predibase to simplify AI model building for engineers. Gain insight into the motivation behind this approach and the technical details of fine-tuning popular open-source LLMs such as Llama 2. Understand how to serve these models cost-effectively with LoRAX (LoRA Exchange), which lets many fine-tuned adapters share a single base-model deployment, enabling organizations to build AI solutions tailored to their specific needs.
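To make the declarative approach concrete, here is a minimal sketch (not taken from the talk) of what fine-tuning Llama 2 with a LoRA adapter looks like through Ludwig's config-driven API; the dataset path, column names, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch: LoRA fine-tuning of Llama 2 with Ludwig's declarative config.
# Dataset path, column names, and hyperparameters are illustrative assumptions.
from ludwig.api import LudwigModel

config = {
    "model_type": "llm",
    "base_model": "meta-llama/Llama-2-7b-hf",
    "input_features": [{"name": "prompt", "type": "text"}],
    "output_features": [{"name": "response", "type": "text"}],
    "adapter": {"type": "lora"},      # parameter-efficient fine-tuning
    "quantization": {"bits": 4},      # shrink the base model to fit a single GPU
    "trainer": {
        "type": "finetune",
        "epochs": 3,
        "learning_rate": 1e-4,
    },
}

model = LudwigModel(config)
model.train(dataset="task_examples.csv")  # CSV with prompt/response columns
```

At serving time, the resulting LoRA adapter is small enough that Predibase's open-source LoRAX server can load many such adapters on top of one shared base-model deployment, which is where the cost savings discussed in the talk come from.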
Syllabus
The Future is Fine-Tuned: Deploying Task-specific LLMs - Devvret Rishi, Predibase
Taught by
Linux Foundation