
Teaching Language to Deaf Infants with a Robot and a Virtual Human

Association for Computing Machinery (ACM) via YouTube

Overview

Explore an innovative approach to teaching language to deaf infants using a multi-agent system that combines a robot and a virtual human. Delve into the challenges of providing sufficient language exposure during critical developmental periods, especially for deaf infants born to hearing parents. Examine the design and implementation of an integrated system engineered to augment language exposure for 6- to 12-month-old infants. Discover how the team addressed the complexities of human-machine design for infants, considering the limitations of screen-based media and robots in language learning. Learn about the system's ability to provide visual language and facilitate socially contingent, human-like conversational exchange. Analyze case studies demonstrating successful engagement with the technology by both deaf and hearing infants. Gain insights into the interdisciplinary team's combined goals, system design, robot and virtual human components, and evaluation process. Understand the design lessons learned and potential implications for future research in accessible and inclusive education for infants with hearing impairments.

Syllabus

Intro
Minimal Language Input In Deaf Infants
Design Challenge
Interdisciplinary Team
Combined Goals
System Design
Robot Design
Virtual Human Design
System Evaluation
Albert
Perception System
Interaction Design
Bella
Celia
Design Lessons
Conclusions

Taught by

ACM SIGCHI
