Task-Driven Network Discovery via Deep Reinforcement Learning on Embedded Spaces

Institute for Pure & Applied Mathematics (IPAM) via YouTube

Classroom Contents

  1. Intro
  2. Complex networks are ubiquitous
  3. Working with incomplete data can skew analyses
  4. The network discovery question
  5. A more accurate representation
  6. Some issues that make the problem difficult
  7. Selective harvesting via reinforcement learning
  8. Policy function
  9. Modeling future reward: return function
  10. Value function (see the sketch after this list)
  11. What are current approaches missing?
  12. State space representation
  13. Map network states into canonical representations
  14. Training set generation for offline learning
  15. Episodic training
  16. The learning objective
  17. Our model: Network Actor-Critic (NAC)
  18. Experiments: baselines & competitors
  19. Experiments: results on real data
  20. Which graph embedding to choose?
  21. Wrap-up: Network Actor-Critic (NAC)
  22. Control of pandemics
  23. Problem and high-level overview of our system: COANET
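
Items 8-10 name the standard reinforcement-learning quantities the talk builds on. For orientation only, here is a minimal sketch of the textbook definitions; the symbols (policy $\pi_\theta$, return $G_t$, value $V^{\pi}$, discount $\gamma$, reward $r_t$) follow the usual RL convention and are assumptions here, not notation taken from the talk itself.

```latex
\begin{align*}
  \pi_\theta(a \mid s)
    &\quad \text{(policy: probability of taking action $a$ in state $s$)} \\
  G_t &= \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k+1}
    \quad \text{(return: discounted future reward, $\gamma \in [0,1)$)} \\
  V^{\pi}(s) &= \mathbb{E}_{\pi}\!\left[\, G_t \mid s_t = s \,\right]
    \quad \text{(value: expected return from state $s$ under $\pi$)}
\end{align*}
```

In the actor-critic framing named in item 17, the "actor" is the policy $\pi_\theta$ and the "critic" is an estimate of the value function $V^{\pi}$; the specifics of how the speakers instantiate these on embedded network states are covered in the video, not here.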
