Stanford University

Stanford Seminar - Objects, Skills, and the Quest for Compositional Robot Autonomy

Stanford University via YouTube

Overview

Explore the future of robot autonomy in this Stanford seminar featuring Yuke Zhu of UT Austin. Delve into the integration of deep learning advances with engineering principles to create scalable autonomous systems. Learn about state-action abstractions and their role in developing a compositional autonomy stack. Discover GIGA and Ditto for learning actionable object representations, and BUDS and MAPLE for scaffolding long-horizon tasks with sensorimotor skills. Gain insights into the challenges of generalization and robustness in robot learning algorithms, and explore potential solutions for widespread deployment. The talk also touches on topics ranging from the James Webb Space Telescope to neural task programming, robotic grasping, and interactive digital training, and closes with a discussion of future research directions toward scalable robot autonomy.

Syllabus

Introduction
James Webb Space Telescope
Robot Learning Workflow
Complex vs Reliable
Abstraction and Composition
System Perspective
Compositional Robot Autonomy Stack
Neural Task Programming
Robotic Grasping
Characterization of Objects
GIGA
Neural Fields
Supervised Procedure
Real Reward Experiments
Body Interaction
Physical Interaction
Concrete Approach
Interactive Digital Training
Questions
First Bus
Work is First
Conclusion
Classroom
Context Principle
MAPLE
Grasping
Action Space
Atomic Primitives
Task Sketch
Conclusions
What we learned
Skill
AI Architecture
New Frontier
Questions and Answers

Taught by

Stanford Online
