Robot Learning in the Era of Large Pretrained Models - Stanford Seminar
Stanford University via YouTube
Overview
Explore the intersection of robot learning and large pretrained models in this Stanford seminar featuring Dorsa Sadigh. Delve into the benefits of interactive robot learning in the context of foundation models, examining two key perspectives. Learn about the role of pretraining in developing visual representations and how language can guide the creation of grounded visual representations for robotics tasks. Investigate the importance of dataset selection during pretraining, including strategies for guiding large-scale data collection and identifying high-quality data for imitation learning. Discover recent work on enabling compositional generalization of learned policies through guided data collection. Conclude by exploring innovative ways to leverage the rich context of large language models and vision-language models in robotics applications. This 56-minute seminar, part of Stanford University's Robotics and Autonomous Systems series, offers valuable insights into the evolving field of robot learning and its integration with advanced AI models.
Syllabus
Stanford Seminar - Robot Learning in the Era of Large Pretrained Models
Taught by
Stanford Online