
Integrating Evidence Over Time: Conditional Models for Speech and Audio Processing

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore conditional models for speech and audio processing in this lecture from the Center for Language & Speech Processing at Johns Hopkins University. Delve into the concept of acoustic events as existing in a rich descriptive subspace, whose dimensions can be viewed as a decomposition of the original event space. Examine how phonological features integrate to determine phonetic identity in speech recognition, and how features such as harmonic energy and cross-channel correlation combine to separate target speech from background noise in auditory scene analysis. Learn about the successes and limitations of Conditional Random Field (CRF) models in automatic speech recognition and computational auditory scene analysis, focusing on how these log-linear methods integrate local evidence over time sequences. Gain insights from work conducted at Ohio State University's Speech and Language Technologies Lab, presented by Assistant Professor Eric Fosler-Lussier, who brings expertise in integrating linguistic insights as priors in statistical learning systems.
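To make the "integrating local evidence over time" idea concrete, here is a minimal linear-chain CRF sketch. It is illustrative only and not code from the lecture: per-frame observation scores stand in for local evidence (e.g. harmonic energy features), transition weights couple adjacent frames, and the log-linear model normalizes over all label sequences. All names and numbers below are invented for the example.

```python
import math
from itertools import product

def seq_score(obs_scores, trans, labels):
    """Unnormalized log-score of one label sequence: sum of per-frame
    observation scores plus transition scores between adjacent labels."""
    s = sum(obs_scores[t][y] for t, y in enumerate(labels))
    s += sum(trans[a][b] for a, b in zip(labels, labels[1:]))
    return s

def log_partition(obs_scores, trans, n_states):
    """Brute-force log Z over all label sequences (fine for tiny toy inputs;
    real CRFs use the forward algorithm for this)."""
    T = len(obs_scores)
    return math.log(sum(
        math.exp(seq_score(obs_scores, trans, labels))
        for labels in product(range(n_states), repeat=T)
    ))

# Toy setting: 2 states (say, target speech vs. background) over 3 frames.
obs = [[1.0, 0.2], [0.9, 0.4], [0.1, 1.2]]   # hypothetical per-frame evidence
trans = [[0.5, -0.5], [-0.5, 0.5]]           # hypothetical transition weights
logZ = log_partition(obs, trans, 2)
p = math.exp(seq_score(obs, trans, [0, 0, 1]) - logZ)
print(p)  # posterior probability of the labeling [0, 0, 1]
```

The key design point the lecture's framing highlights: the model never classifies a frame in isolation; every labeling is scored jointly, so strong local evidence at one frame can override weak evidence at its neighbors through the transition terms.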

Syllabus

Integrating Evidence Over Time: A Look at Conditional Models for Speech & Audio Processing - 2009

Taught by

Center for Language & Speech Processing (CLSP), JHU

