YouTube videos curated by Class Central.
Computational Principles of Sensorimotor Control - Lecture 1
- 1 Complexity of human movement control
- 2 Modest success in robotics: Manipulation
- 3 Normative approach to human movement control
- 4 Reverse-engineering sensorimotor control
- 5 Motor planning
- 6 Arm movements: Paths
- 7 Eye movements: Saccades
- 8 Models
- 9 The Assumption of Optimality
- 10 The ideal cost for goal-directed movement
- 11 Motor noise is signal-dependent
- 12 Signal-dependent noise and optimal control
- 13 Pointing movements: minimize variability
- 14 Motor control in the late
- 15 The demise of the desired trajectory
- 16 Motor control in the early
- 17 Optimal Feedback Control (Todorov, Kappen)
- 18 Optimal control and planning
- 19 State estimation: Interpreting the uncertain state of the world
- 20 Generative model of state evolution
- 21 Kalman filter is the Bayesian estimator
- 22 Motor prediction with forward model
- 23 How is eye position estimated?
- 24 Motor prediction
- 25 Types of Kalman estimation problems
- 26 Minimizing delays
- 27 Types of Motor Learning
- 28 Representations in motor learning
- 29 Mechanistic models
- 30 Normative models
- 31 Impedance
- 32 Measuring stiffness
- 33 Controlling stiffness
- 34 Bayesian Decision Theory
- 35 Sensorimotor learning and Bayes' rule
- 36 Loss Functions in movement
- 37 Virtual pea shooter
- 38 Predictions
- 39 Loss function is robust to outliers
- 40 Imposed loss function
- 41 Summary