

Speech and Audio Processing in Non-Invasive Brain-Computer Interfaces at Meta

Center for Language & Speech Processing (CLSP), JHU via YouTube

Overview

Explore the potential of non-invasive neural interfaces in transforming human-computer interaction through this 43-minute talk by Michael Mandel from Reality Labs at Meta. Delve into the development of an interface for controlling augmented reality devices using electromyographic (EMG) signals captured at the wrist. Discover how speech and audio technologies are uniquely suited to unlocking the full potential of these signals and interactions. Learn about the neuroscientific background necessary to understand these signals, and examine automatic speech recognition-inspired interfaces for generating text and beamforming-inspired interfaces for identifying individual neurons. Gain insights into how these technologies connect with egocentric machine intelligence tasks that could be implemented on augmented reality devices. Understand the potential for creating effortless and joyful interfaces that provide low-friction, information-rich, and always-available inputs for users.
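To make the beamforming-inspired idea mentioned above concrete, here is a minimal, illustrative Python sketch (not taken from the talk) of delay-and-sum beamforming over a simulated multichannel wrist-EMG array: each channel is time-aligned and the channels are averaged so that activity from one assumed source stands out above the noise. All names and parameters here (channel count, sample rate, delays) are assumptions for illustration only.

```python
# A minimal sketch (not from the talk) of delay-and-sum beamforming over a
# simulated multichannel wrist-EMG array. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
fs = 2000                       # assumed sample rate in Hz
n_channels, n_samples = 16, fs  # 16 electrodes, 1 second of data

# Simulate a short "motor-unit-like" burst arriving at each electrode with a
# channel-specific delay, buried in noise.
burst = np.exp(-np.linspace(0, 6, 80)) * np.sin(np.linspace(0, 20 * np.pi, 80))
true_delays = rng.integers(0, 12, size=n_channels)     # delays in samples
x = 0.3 * rng.standard_normal((n_channels, n_samples))
for ch, d in enumerate(true_delays):
    x[ch, 200 + d:200 + d + burst.size] += burst

# Delay-and-sum: undo each channel's presumed delay, then average the channels.
aligned = np.stack([np.roll(x[ch], -int(d)) for ch, d in enumerate(true_delays)])
beamformed = aligned.mean(axis=0)

# The burst stands out more clearly after beamforming than in any single channel.
print("single-channel peak-to-noise ratio:", np.abs(x[0]).max() / x[0].std())
print("beamformed peak-to-noise ratio:    ", np.abs(beamformed).max() / beamformed.std())
```

This toy example only gestures at the idea of steering an electrode array toward one source; the approaches discussed in the talk for identifying individual neurons from wrist EMG are more sophisticated.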

Syllabus

Speech and Audio Processing in Non-Invasive Brain-Computer Interfaces at Meta [Michael Mandel]

Taught by

Center for Language & Speech Processing (CLSP), JHU

