Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys - 2009
Center for Language & Speech Processing (CLSP), JHU via YouTube
Overview
Explore the complex interplay between bottom-up and top-down visual processing in a lecture by Dr. Laurent Itti of the University of Southern California. Delve into the mathematical principles and neuro-computational architectures underlying visual attentional selection in humans and monkeys. Discover how these models can be applied to real-world vision challenges using stimuli from television and video games. Learn about Dr. Itti's research on developing flexible models of visual attention that can be modulated by specific tasks. Gain insights into how the models' predictions compare with behavioral recordings from primates. Understand the importance of combining sensory signals from the environment with behavioral goals when processing complex natural environments. Examine the speaker's background in electrical engineering and in computation and neural systems, as well as his extensive research and teaching experience in artificial intelligence, robotics, and biological vision.
Syllabus
Modeling Bottom-Up and Top-Down Visual Attention in Humans and Monkeys – Laurent Itti (USC) - 2009
Taught by
Center for Language & Speech Processing (CLSP), JHU