Population-Based Methods for Single- and Multi-Agent Reinforcement Learning - Lecture

USC Information Sciences Institute via YouTube

Welcome to the AI Seminar Series



Classroom Contents

  1. Welcome to the AI Seminar Series
  2. Reinforcement Learning (RL)
  3. RL basics
  4. Deep Q-learning (DQN)
  5. Why use target network?
  6. Why reduce estimation variance
  7. Ensemble RL methods
  8. Ensemble RL for variance reduction
  9. MeanQ design choices
  10. Combining with existing techniques
  11. Experiment results (100K interaction steps)
  12. Obviating the target network
  13. Comparing model size and update rate
  14. MeanQ: variance reduction
  15. Loss of ensemble diversity
  16. Linear function approximation
  17. Diversity through independent sampling
  18. Ongoing investigation
  19. Takeaways
  20. Fictitious Play
  21. What to do in large dynamical environments
  22. PSRO convergence properties
  23. Extensive-Form Double Oracle (XDO)
  24. XDO: results
  25. XDO convergence properties
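
The chapters on DQN, target networks, and ensemble-based variance reduction (items 4 through 8 and 14 through 17 above) revolve around one idea: bootstrapping the temporal-difference target from an average of several Q-estimates instead of a single one. The short Python sketch below is an illustration of that generic idea only, not the lecture's MeanQ implementation; the reward, discount factor, noise level, ensemble size, and assumption of independent ensemble members are all made up for the example.

import numpy as np

# Illustrative sketch only: compare the variance of a TD target that
# bootstraps from a single noisy Q-estimate against one that bootstraps
# from the mean of K independent estimates. All constants are invented
# for this toy example; this is not the lecture's MeanQ code.

rng = np.random.default_rng(0)

true_q_next = 1.0      # assumed true max_a Q(s', a) for one toy transition
reward, gamma = 0.5, 0.99
noise_std = 0.3        # assumed per-network estimation noise
K = 5                  # ensemble size
n_trials = 100_000

# Single-network TD target: r + gamma * Q_hat(s', a*)
single_targets = reward + gamma * (true_q_next + noise_std * rng.normal(size=n_trials))

# Ensemble-mean TD target: r + gamma * mean_k Q_hat_k(s', a*)
ensemble_estimates = true_q_next + noise_std * rng.normal(size=(n_trials, K))
ensemble_targets = reward + gamma * ensemble_estimates.mean(axis=1)

print(f"single-network target variance: {single_targets.var():.5f}")
print(f"ensemble-mean target variance:  {ensemble_targets.var():.5f}  (~1/K smaller)")

With independent ensemble members the target variance drops by roughly a factor of K; the "Loss of ensemble diversity" and "Diversity through independent sampling" chapters concern exactly the caveat that correlated members erode this reduction.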
