Overview
Explore cutting-edge research on model-based reinforcement learning agents that learn from interactions and build predictive models that generalize across a wide range of objects and materials. Delve into the novel representations and structural priors integrated into these learning systems to model dynamics at different levels of abstraction, and discover how such structures enhance model-based planning algorithms, enabling robots to accomplish complex manipulation tasks such as manipulating object piles, shaping deformable foam, and making dumplings.

Learn about recent progress in D3Fields, a scene representation that is simultaneously 3D, semantic, and dynamic, and examine the integration of large language models with structured scene representations, which allows robots to perform diverse everyday tasks specified in natural language.

Gain insights from Yunzhu Li, an Assistant Professor of Computer Science at the University of Illinois Urbana-Champaign, whose work lies at the intersection of robotics, computer vision, and machine learning, and aims to enhance robots' ability to perceive and interact with the physical world.
Syllabus
Yunzhu Li: Learning Structured World Models From and For Physical Interactions
Taught by
Montreal Robotics