Neuromorphic Language Understanding - Deep Neural Networks for Low-Power Language Processing
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore the intersection of recurrent neural networks and biologically inspired models of language understanding in this 2015 talk by Guido Zarrella of MITRE Corporation. Discover how deep neural networks are being applied to language processing tasks and how they can be adapted to run on ultra-low-power neuromorphic hardware that simulates neuron spiking. Examine a proof-of-concept interactive embedded system that uses recurrent neural networks for language processing while consuming minimal power. Gain insights into the challenges and advances in bridging artificial neural networks with cognitive models of language comprehension. Delve into Zarrella's work on unsupervised learning of meaning and intent in informal language, drawing on experience that dates back to his undergraduate research at Carnegie Mellon University.
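To give a rough sense of what "hardware that simulates neuron spiking" means, the sketch below simulates a single leaky integrate-and-fire neuron in Python. This is an illustrative example only, not code from the talk; the function name, parameter values, and constant input current are assumptions chosen for readability rather than details of any particular neuromorphic chip.

```python
# Illustrative sketch: one leaky integrate-and-fire (LIF) neuron, the kind of
# spiking unit that neuromorphic hardware emulates. Parameter values are
# arbitrary assumptions, not taken from the talk or any specific device.
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau=20e-3,
                 v_rest=0.0, v_reset=0.0, v_threshold=1.0):
    """Integrate input current over time; emit a spike whenever the
    membrane potential crosses threshold, then reset it."""
    v = v_rest
    spikes = []
    for i_t in input_current:
        # Leaky integration: potential decays toward rest while being
        # driven up by the input current.
        v += (-(v - v_rest) + i_t) * (dt / tau)
        if v >= v_threshold:
            spikes.append(1)
            v = v_reset  # reset after firing
        else:
            spikes.append(0)
    return np.array(spikes)

# Constant drive strong enough to make the neuron fire periodically.
current = np.full(1000, 1.5)
spike_train = simulate_lif(current)
print("spikes emitted:", spike_train.sum())
```

Because units in this style of model communicate through sparse spike events rather than dense floating-point activations, computation of this kind maps naturally onto the ultra-low-power hardware discussed in the talk.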
Syllabus
Neuromorphic Language Understanding – Guido Zarrella (MITRE Corporation) - 2015
Taught by
Center for Language & Speech Processing (CLSP), JHU