Overview
Explore the challenges and advancements in interactive imitation learning for robots working alongside humans in this Stanford seminar. Dive into feedback-driven covariate shift and the prediction of human intent, examining unified distribution-matching frameworks and graph neural network approaches. Learn how these methods contribute to self-driving technology deployed at scale. Discover solutions for adapting to individual human preferences, improving online learning, and understanding natural human interactions. Gain insights into the complexities of programming rules, Markov decision processes, and what happens in the infinite-data limit. Analyze quantitative plots, non-realizable expert simulations, and driving simulators to understand the practical applications of these concepts. Conclude with an exploration of grammar modes, merging scenarios, and transformer networks in the context of interactive imitation learning.
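As a rough illustration of the interactive imitation learning loop the seminar centers on (a DAgger-style data-aggregation scheme), the Python sketch below assumes hypothetical env, expert, and learner objects; it is not code from the talk.

# Minimal, illustrative DAgger-style loop with hypothetical `env`, `expert`,
# and `learner` interfaces (not code from the seminar).
def dagger(env, expert, learner, iterations=10, horizon=100):
    dataset = []  # aggregated (state, expert_action) pairs
    for _ in range(iterations):
        state = env.reset()
        for _ in range(horizon):
            # Query the interactive expert for a label on each state the learner visits.
            dataset.append((state, expert.act(state)))
            # Roll out the learner's own action so training data is drawn from
            # the state distribution the learned policy will actually encounter,
            # which is how DAgger counters feedback-driven covariate shift.
            state, done = env.step(learner.act(state))
            if done:
                break
        learner.fit(dataset)  # supervised learning on the aggregated data
    return learner

The key design choice is executing the learner's own actions while querying the expert for labels, so the training distribution matches the states the deployed policy will visit.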
Syllabus
Intro
Welcome
The Question
Two Fundamental Challenges
Aurora Driver
Programming Rules
Markov Decision Process
Challenges
Feedback Drives Covariate Shift
How Common Is This Problem?
Feedback Driving Covariate Shift
Benchmarks
Infinite Data Limit
Hard Setting
DAgger
Interactive Expert
Expert Intervention Learning
Quantitative Plots
Non-Realizable Expert
Simulation
Question Querying
Driving Simulators
Open Questions
Example
Grammar Modes
Merging Scenario
Transformer Net
Conclusion
Taught by
Stanford Online