Overview
Explore the challenges and solutions encountered when scaling WhisperSpeech models to over 80,000 hours of speech in this conference talk. Discover the importance of small-scale experiments, maximizing GPU utilization, and the transition from single- to multi-GPU training. Learn about the significant performance improvements achieved by adopting WebDataset, along with strategies for scaling AI models smoothly. Gain insights into GPU procurement options and the differences between consumer- and professional-grade GPUs. Delve into the process of creating high-quality, open-source text-to-speech models based on cutting-edge research from major AI labs, and the lessons learned in developing state-of-the-art speech synthesis capabilities.
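The WebDataset speedup mentioned above comes from packing training samples into sequential tar "shards" so the loader streams a few large files instead of issuing millions of small random reads. The following stdlib-only Python sketch illustrates that sharding pattern under assumed file names and a hypothetical key scheme; it is not code from the talk, and real pipelines would use the `webdataset` library with a framework data loader.

```python
# Sketch of the tar-shard idea behind WebDataset, using only the standard
# library. Shard/file names and the sample grouping key are illustrative.
import io
import tarfile

def write_shard(path, samples):
    """Pack (key, audio_bytes, transcript) samples into one tar shard."""
    with tarfile.open(path, "w") as tar:
        for key, audio, text in samples:
            for suffix, payload in ((".wav", audio), (".txt", text.encode())):
                info = tarfile.TarInfo(name=key + suffix)
                info.size = len(payload)
                tar.addfile(info, io.BytesIO(payload))

def read_shard(path):
    """Stream the shard sequentially, regrouping members by key prefix."""
    samples = {}
    with tarfile.open(path, "r") as tar:
        for member in tar:
            key, _, suffix = member.name.rpartition(".")
            samples.setdefault(key, {})[suffix] = tar.extractfile(member).read()
    return samples

if __name__ == "__main__":
    write_shard("shard-000000.tar",
                [("utt0", b"\x00\x01", "hello"), ("utt1", b"\x02", "world")])
    loaded = read_shard("shard-000000.tar")
    print(sorted(loaded))  # keys of the two reassembled samples
```

Because each shard is read front to back, this layout keeps disks and network filesystems doing large sequential I/O, which is what makes it scale to tens of thousands of hours of audio.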
Syllabus
Tricks Learned from Scaling WhisperSpeech Models to 80k+ Hours of Speech - Jakub Cłapa, Collabora
Taught by
Linux Foundation