Building and Evaluating Prompts on Production Grade Datasets for Conversational AI
Toronto Machine Learning Series (TMLS) via YouTube
Overview
Learn how to effectively construct and evaluate prompts for production-level Large Language Model (LLM) implementations in this 29-minute conference talk from the Toronto Machine Learning Series. Explore methodologies and techniques for creating production-style datasets designed specifically for LLM tasks, with a focus on conversational AI applications. Discover practical insights from Voiceflow's Lead of Agent Performance & ML Platform Bhuvana Adur Kannan and Machine Learning Engineer Yoyo Yang as they share their experiences developing and deploying prompt-based features. Learn how to navigate the challenges of prompt engineering in production environments while gaining valuable lessons from real-world implementations.
Syllabus
Building and Evaluating Prompts on Production Grade Datasets
Taught by
Toronto Machine Learning Series (TMLS)