Vision, Touch & Sound for Robustness & Generalizability in Robotic Manipulation

Stanford HAI via YouTube

Classroom Contents

  1. HAI Weekly Seminar
  2. Previous work
  3. Experimental setup
  4. Learning generalizable representations
  5. Dynamics prediction from self-supervision
  6. How is each modality used?
  7. Overview of our method
  8. Lessons Learned
  9. Overview of today's talk
  10. Related works
  11. Crossmodal Compensation Model (CCM)
  12. Training CCM
  13. Corrupted sensor detection during deployment
  14. CCM Task Success Rates
  15. Model-based methods fit physically interpretable parameters
  16. Deep learning-based methods can learn from data in the wild
  17. Differentiable audio rendering can learn interpretable parameters from data in the wild
  18. DiffImpact gets the best of both worlds for impact sounds
  19. Physically interpretable parameters are easier to reuse
  20. Decomposing an impact sound is an ill-posed problem
  21. Modeling rigid object impact forces
  22. Parameterizing contact forces
  23. Optimize an L1 loss on magnitude spectrograms (see the sketch after this list)
  24. Analysis by Synthesis Experiment
  25. Analysis by Synthesis: Ceramic Mug
  26. End-to-End Learning ASMR: Ceramic Plate
  27. Robot Source Separation Experiment
  28. Steel Fork and Ceramic Mug
  29. DiffImpact's Key Takeaways
  30. Conclusions
  31. Thank you for your attention
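Chapter 23 names the optimization objective used in the DiffImpact portion of the talk: an L1 loss between magnitude spectrograms of a rendered and a recorded impact sound. The sketch below is a minimal illustration of that kind of loss, not the speakers' code; it is written in PyTorch as an assumption, and the function names and STFT settings (n_fft, hop_length) are illustrative choices, not taken from the talk.

    # Minimal sketch, not the speakers' code: an L1 loss between the
    # magnitude spectrograms of a rendered and a recorded impact sound.
    # STFT settings below are illustrative assumptions.
    import torch

    def magnitude_spectrogram(wave: torch.Tensor, n_fft: int = 1024,
                              hop_length: int = 256) -> torch.Tensor:
        # Magnitude of the short-time Fourier transform; result has
        # shape (freq_bins, time_frames) for a mono waveform.
        window = torch.hann_window(n_fft, device=wave.device)
        spec = torch.stft(wave, n_fft=n_fft, hop_length=hop_length,
                          window=window, return_complex=True)
        return spec.abs()

    def spectrogram_l1_loss(rendered: torch.Tensor,
                            recorded: torch.Tensor) -> torch.Tensor:
        # Mean absolute difference between the two magnitude spectrograms;
        # assumes both waveforms have the same length and sample rate.
        return torch.mean(torch.abs(magnitude_spectrogram(rendered)
                                    - magnitude_spectrogram(recorded)))

Because the STFT and the L1 loss are both differentiable, gradients from this objective can flow back through a differentiable audio renderer to the physically interpretable parameters being fit, which is the point of the differentiable-rendering approach the chapter titles describe.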
