Class Central Classrooms beta
YouTube videos curated by Class Central.
Classroom Contents
Vision, Touch & Sound for Robustness & Generalizability in Robotic Manipulation
- 1 HAI Weekly Seminar
- 2 Previous work
- 3 Experimental setup
- 4 Learning generalizable representations
- 5 Dynamics prediction from self-supervision
- 6 How is each modality used?
- 7 Overview of our method
- 8 Lessons Learned
- 9 Overview of today's talk
- 10 Related works
- 11 Crossmodal Compensation Model (CCM)
- 12 Training CCM
- 13 Corrupted sensor detection during deployment
- 14 CCM Task Success Rates
- 15 Model-based methods fit physically interpretable parameters
- 16 Deep learning-based methods can learn from data in the wild
- 17 Differentiable audio rendering can learn interpretable parameters from data in the wild
- 18 DiffImpact gets the best of both worlds for impact sounds
- 19 Physically interpretable parameters are easier to reuse
- 20 Decomposing an impact sound is an ill-posed problem
- 21 Modeling rigid object impact forces
- 22 Parameterizing contact forces
- 23 Optimize an L1 loss on magnitude spectrograms (sketched in code after this list)
- 24 Analysis by Synthesis Experiment
- 25 Analysis by Synthesis: Ceramic Mug
- 26 End-to-End Learning ASMR: Ceramic Plate
- 27 Robot Source Separation Experiment
- 28 Steel Fork and Ceramic Mug
- 29 DiffImpact's Key Takeaways
- 30 Conclusions
- 31 Thank you for your attention
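
Item 23 names the objective DiffImpact optimizes: an L1 loss between magnitude spectrograms of the rendered and recorded audio. Below is a minimal PyTorch sketch of such a loss; the function name and the STFT settings (`n_fft`, `hop_length`) are illustrative assumptions, not values from the talk.

```python
import torch

def l1_spectrogram_loss(pred_audio: torch.Tensor,
                        target_audio: torch.Tensor,
                        n_fft: int = 1024,
                        hop_length: int = 256) -> torch.Tensor:
    """L1 distance between magnitude spectrograms of two waveforms.

    A sketch of the loss named in item 23; the STFT parameters here
    are assumptions chosen for illustration.
    """
    window = torch.hann_window(n_fft, device=pred_audio.device)
    pred_mag = torch.stft(pred_audio, n_fft, hop_length=hop_length,
                          window=window, return_complex=True).abs()
    target_mag = torch.stft(target_audio, n_fft, hop_length=hop_length,
                            window=window, return_complex=True).abs()
    # Comparing magnitudes discards phase, so the rendered sound only
    # has to match the recording's time-frequency energy, not its
    # exact waveform.
    return (pred_mag - target_mag).abs().mean()
```

Because the STFT and the L1 distance are differentiable, gradients of this loss can flow back into the renderer's physically interpretable parameters, which is presumably what enables the analysis-by-synthesis experiments listed in items 24-26.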