Large Scale Universal Speech Generative Models
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore cutting-edge developments in large-scale universal speech generative models in this comprehensive lecture by Wei-Ning Hsu, a research scientist at Meta Fundamental AI Research (FAIR). Delve into self-supervised learning and generative models for speech and audio, examining pioneering work such as HuBERT, AV-HuBERT, TextlessNLP, data2vec, wav2vec-U, textless speech translation, and Voicebox. Begin with an introduction to conventional neural speech generative models and the limitations that prevent them from scaling to Internet-scale data. Compare the latest large-scale generative models for text and images to outline promising approaches for building scalable speech models. Discover Voicebox, a highly versatile generative model for speech, trained on over 50K hours of multilingual speech using a flow-matching objective. Learn about its capabilities in monolingual and cross-lingual zero-shot TTS, holistic style conversion, transient noise removal, content editing, and diverse sample generation. Gain insights into Voicebox's state-of-the-art performance and run-time efficiency, and understand its potential impact on the field of speech generation and processing.
Syllabus
Large Scale Universal Speech Generative Models - Wei-Ning Hsu
Taught by
Center for Language & Speech Processing (CLSP), JHU