Shared Memory Parallelism in Julia with Multi-Threading - Parallel Depth-First Scheduling

A talk by The Julia Programming Language, via YouTube.

  1. Welcome!
  2. Why do we need threads?
  3. Task parallelism
  4. Data parallelism (task and data parallelism are illustrated in the first sketch after this list)
  5. Julia's experimental threading infrastructure, added in 2015/2016
  6. Successes of the aforementioned threading infrastructure
  7. What we've learned
  8. The problem is not adding threads to Julia, but making them useful at every level
  9. Nested parallelism: parallel code calling a function from a library that is also parallel (see the second sketch after this list)
  10. Example: multiplying two n x n matrices
  11. Example: running the code sequentially
  12. Example: you need O(n^2) space
  13. Example: running the code in parallel on 4 cores with OpenMP, OMP_NESTED=1
  14. Example: such parallel code needs O(n^3) space
  15. Another way: work-stealing
  16. Problem: the work-stealing algorithm essentially runs like a serial algorithm
  17. Parallel depth-first scheduling
  18. partr -- the parallel task runtime
  19. partr implementation
  20. partr -- priority queues
  21. partr -- handling nested parallelism
  22. Possible problem: we do not synchronize at each spawn point
  23. Why are all these things important?
  24. Q&A: is Julia more suitable for implementing partr than other languages?
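For readers following along, here is a minimal sketch of the two styles named in items 3 and 4: task parallelism with Threads.@spawn and data parallelism with Threads.@threads. This is not code from the talk; it assumes Julia 1.3 or later (where Threads.@spawn is available), and the function names are illustrative.

```julia
using Base.Threads

# Task parallelism: independent units of work run as separate tasks.
# Threads.@spawn (Julia 1.3+) schedules a task on any available thread.
function task_parallel_sum(a, b)
    ta = Threads.@spawn sum(a)   # runs concurrently with the spawn below
    tb = Threads.@spawn sum(b)
    return fetch(ta) + fetch(tb)
end

# Data parallelism: the same operation applied across a collection,
# with the loop iterations split over the available threads.
function data_parallel_square!(out, x)
    Threads.@threads for i in eachindex(x)
        out[i] = x[i]^2
    end
    return out
end

x = rand(1_000)
data_parallel_square!(similar(x), x)
task_parallel_sum(rand(500), rand(500))
```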
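Item 9's nested parallelism (a parallel caller invoking a library routine that is itself parallel) can be sketched as follows. Under a composable, depth-first scheduler such as partr, all of these nested tasks draw from one shared pool of threads instead of oversubscribing the machine. The routine names here are hypothetical, not part of the talk.

```julia
using Base.Threads

# A "library" routine that is itself parallel.
function parallel_fill!(v, val)
    Threads.@threads for i in eachindex(v)
        v[i] = val
    end
    return v
end

# Caller code that is also parallel and invokes the parallel routine above.
function nested_demo(n)
    buffers = [Vector{Float64}(undef, n) for _ in 1:4]
    @sync for (k, buf) in enumerate(buffers)
        Threads.@spawn parallel_fill!(buf, Float64(k))  # nested parallel call
    end
    return buffers
end

nested_demo(10_000)
```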
