Dorylus - Affordable, Scalable, and Accurate GNN Training with Distributed CPU Servers and Serverless Threads

USENIX via YouTube



Classroom Contents


  1. Intro
  2. Machine Learning
  3. Graph Neural Networks
  4. Stages of a Graph Neural Network
  5. GPUs Are Not a Good Fit for Graph Operations
  6. Combining CPUs and GPUs Is Cost-Ineffective
  7. Using Many CPU Servers Can Still Be Expensive
  8. Key Insight: Serverless Fits Our Goals
  9. Serverless Achieves Low-Cost, Scalable Efficiency
  10. Challenges with Using Serverless
  11. Challenge 1: Limited Resources
  12. Solution: Computation Separation
  13. Dorylus Architecture
  14. Flow of Decomposed Tasks
  15. Challenge 2: Limited Network
  16. Solution: Create a Pipeline of Decomposed Tasks
  17. Data Chunks Moving Through Layers of the Pipeline
  18. Synchronizing after Scatter Hinders the Pipeline
  19. Two Sync Points Make Asynchrony Difficult
  20. Minimizing Effects of Asynchrony on Convergence
  21. Serverless Optimizations
  22. Data Graphs
  23. We Evaluated Several Aspects of Dorylus
  24. High Value on Large Sparse Graphs
  25. Dorylus Outperforms Existing Systems
  26. Dorylus Scales Full-Graph Training
  27. Conclusion: Dorylus Provides Value
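The pipelining and asynchrony portion of the outline (items 16–20) centers on letting data chunks flow through the decomposed tasks without full synchronization at every step, while still bounding how far fast chunks may run ahead so that convergence is not harmed. As a rough illustration of that idea only — the function name and scheduling structure below are hypothetical, not Dorylus's actual implementation — here is a minimal bounded-staleness scheduler sketch in Python:

```python
# Hypothetical sketch of bounded-staleness scheduling for pipelined chunks.
# A task (chunk, epoch) may start only while the chunk is at most
# `staleness_bound` epochs ahead of the slowest chunk, so asynchrony is
# allowed but bounded (the general idea behind item 20 in the outline).

from collections import deque

def bounded_staleness_schedule(num_chunks, num_epochs, staleness_bound):
    """Return the order in which (chunk, epoch) tasks are allowed to run."""
    progress = [0] * num_chunks          # next epoch each chunk will run
    order = []
    ready = deque(range(num_chunks))
    while ready:
        c = ready.popleft()
        e = progress[c]
        if e >= num_epochs:
            continue                     # chunk finished; drop it
        # Staleness check: the slowest chunk's progress bounds how far
        # ahead chunk c may run.
        if e - min(progress) <= staleness_bound:
            order.append((c, e))
            progress[c] += 1
        ready.append(c)                  # revisit this chunk later
    return order
```

With `staleness_bound = 0` this degenerates to fully synchronous rounds; a larger bound lets chunks overlap epochs, which is what keeps a pipeline of limited serverless workers busy.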
