Decoding the Brain to Help Build Machines - 2017
Center for Language & Speech Processing(CLSP), JHU via YouTube
Overview
Explore groundbreaking research on the vision-language-motor interface in the human brain and its applications to computer systems in this lecture by Jeffrey Siskind from Purdue University. Delve into fMRI investigations that reveal how the brain processes language across different modalities, including spoken sentences, text, and video. Discover how researchers can read out individual concepts, words, and even entire sentences from brain scans, and learn about the compositional mental semantic representation shared across subjects and modalities. Examine three computational systems developed based on this research: one that tracks objects in video using sentential descriptions, another that learns noun and preposition meanings from robot navigation, and a third that plays checkers using natural language instructions. Gain insights into the interdisciplinary work connecting neuroscience, computer vision, robotics, and artificial intelligence.
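To make the decoding idea concrete, the following is a minimal, hypothetical sketch (not from the lecture or Siskind's own code) of reading out stimulus concepts from fMRI voxel patterns with a linear classifier, evaluated leave-one-subject-out to probe whether the representation is shared across subjects. All data shapes, labels, and parameters are synthetic placeholders for illustration only.

# Illustrative sketch only: decode stimulus concepts from (synthetic) fMRI
# voxel patterns with a linear classifier, testing on a held-out subject.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_subjects, trials_per_subject, n_voxels, n_concepts = 4, 60, 500, 6

# Synthetic voxel responses: each concept adds a weak shared pattern plus noise.
concept_patterns = rng.normal(size=(n_concepts, n_voxels))
X, y, groups = [], [], []
for s in range(n_subjects):
    for _ in range(trials_per_subject):
        c = rng.integers(n_concepts)
        X.append(concept_patterns[c] + rng.normal(scale=3.0, size=n_voxels))
        y.append(c)
        groups.append(s)
X, y, groups = np.array(X), np.array(y), np.array(groups)

# Leave-one-subject-out: train on all but one subject, test on the held-out one.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracies = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))

print(f"Cross-subject decoding accuracy: {np.mean(accuracies):.2f} "
      f"(chance = {1 / n_concepts:.2f})")

Above-chance accuracy on the held-out subject would indicate a concept representation that generalizes across individuals, which is the kind of shared, compositional structure the lecture describes.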
Syllabus
Decoding the Brain to Help Build Machines -- Jeffrey Siskind (Purdue University) - 2017
Taught by
Center for Language & Speech Processing(CLSP), JHU