Explore cutting-edge techniques for tackling overlapping speech in multi-talker Automatic Speech Recognition (ASR) in this 52-minute talk by Desh Raj from the Center for Language & Speech Processing at Johns Hopkins University. The talk surveys "target-speaker" methods, starting with a traditional signal processing approach and a new GPU-accelerated implementation that dramatically speeds up meeting transcription. Learn about a project that leverages wake-words for on-device target-speaker ASR, yielding significant Word Error Rate (WER) reductions, and discover how self-supervised models can be incorporated into this paradigm to further improve recognition. Gain insight into the challenges of building effective ASR systems for complex audio environments such as meeting transcription and smart assistants in noisy settings.