Multilingual Representations for Low-Resource Speech Processing

MIT CBMM via YouTube


Contents

  1. Intro
  2. Why Care About Low-Resource Speech Processing?
  3. How Much Transcribed Audio Do We Need?
  4. Why Do We Need All That Training Data?
  5. Multilingual Features
  6. The IARPA Babel Program
  7. Babel Languages
  8. Limited Resources
  9. What Is Keyword Search, and Why Focus on It?
  10. How Do We Measure Keyword Search Performance?
  11. Properties of Term-Weighted Value
  12. Take-Home Messages
  13. Three Ways of Looking at Speech
  14. Deep Neural Network
  15. A Stacked DNN Architecture
  16. Convolutional Neural Network
  17. Considered 2 CNN Architectures
  18. Recurrent Neural Network
  19. Bidirectional LSTM Architecture
  20. Three Use Cases
  21. More Expressive Architectures Make a Big Difference
  22. Fixed Features Allow for Rapid Development
  23. Our Partners
  24. Babel Resources
