Tricks Learned from Scaling WhisperSpeech Models to 80k+ Hours of Speech

Linux Foundation via YouTube

Overview

Explore the challenges and solutions encountered when scaling WhisperSpeech models to more than 80,000 hours of speech in this conference talk. Discover the importance of running small-scale experiments first, maximizing GPU utilization, and transitioning from single-GPU to multi-GPU training. Learn how adopting WebDataset delivered significant data-loading performance improvements, and pick up strategies for scaling AI models with minimal friction. Gain insights into GPU procurement options and the trade-offs between consumer and professional-grade GPUs. Finally, delve into the process of building high-quality, open-source text-to-speech models on top of published research from major AI labs, and the lessons learned in developing state-of-the-art speech synthesis capabilities.
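The talk attributes much of its data-loading speedup to WebDataset. As a rough illustration only (not material from the talk), the sketch below shows the core pattern the `webdataset` library is built on, using just the Python standard library: samples are packed as adjacent files inside tar "shards" and read back sequentially, so training I/O becomes a few large streaming reads instead of millions of small random ones. The sample keys and extensions here are invented for the example; the real library layers shuffling, decoding, and multi-shard handling on top.

```python
# Minimal sketch of the WebDataset idea using only the standard library.
# Samples are stored as adjacent files ("key.ext") inside a tar shard and
# streamed back sequentially; file names and contents below are invented.
import io
import tarfile

def make_shard(samples):
    """Pack {key: {ext: bytes}} samples into an in-memory tar shard."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        for key, files in samples.items():
            for ext, data in files.items():
                info = tarfile.TarInfo(name=f"{key}.{ext}")
                info.size = len(data)
                tar.addfile(info, io.BytesIO(data))
    buf.seek(0)
    return buf

def iter_samples(shard):
    """Stream the shard sequentially, grouping files that share a key."""
    current_key, current = None, {}
    with tarfile.open(fileobj=shard, mode="r|") as tar:  # "|" = streaming mode
        for member in tar:
            key, _, ext = member.name.rpartition(".")
            if key != current_key and current:
                yield current_key, current
                current = {}
            current_key = key
            current[ext] = tar.extractfile(member).read()
    if current:
        yield current_key, current

shard = make_shard({
    "utt0": {"wav": b"\x00\x01", "txt": b"hello"},
    "utt1": {"wav": b"\x02\x03", "txt": b"world"},
})
for key, sample in iter_samples(shard):
    print(key, sorted(sample))
```

The equivalent with the real library would be roughly `wds.WebDataset("shards-{000000..000999}.tar")`, which additionally lets each GPU worker stream a disjoint subset of shards.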

Syllabus

Tricks Learned from Scaling WhisperSpeech Models to 80k+ Hours of Speech - Jakub Cłapa, Collabora

Taught by

Linux Foundation
