Overview
Explore a 25-minute conference talk from POPL 2024 on Automatic Parallelism Management. Delve into techniques that combine static compilation with run-time support to manage parallelism in high-level languages. Learn how researchers from Carnegie Mellon University and the Rochester Institute of Technology tackle the challenge of keeping the costs of creating parallelism low without sacrificing its performance benefits. Discover a compiler pipeline that embeds 'potential parallelism' in the call stack, paired with a complementary run-time system that decides dynamically when to turn that potential into actual parallel tasks. Understand how these methods preserve the asymptotic properties of parallel programs while removing the burden of manual granularity tuning. Gain insight into the implementation of these techniques in the MPL compiler for Parallel ML and their practical performance.
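For context (not taken from the talk itself): the sketch below illustrates the kind of manual granularity control that automatic parallelism management aims to make unnecessary. It uses OCaml with the Domainslib library as a stand-in for Parallel ML/MPL; the `grain` cutoff, the Fibonacci example, and the pool size are illustrative assumptions, not part of the presented work.

```ocaml
(* Illustrative sketch only: a fork-join Fibonacci with a hand-tuned
   granularity cutoff, i.e. the manual optimization burden that automatic
   parallelism management is designed to remove. *)

(* Hand-picked threshold: too small and task-creation overhead dominates,
   too large and available parallelism is wasted. *)
let grain = 20

(* Plain sequential version used below the cutoff. *)
let rec fib_seq n = if n < 2 then n else fib_seq (n - 1) + fib_seq (n - 2)

(* Parallel version: forks a task only when the problem is "big enough". *)
let rec fib pool n =
  if n < grain then fib_seq n
  else
    let left = Domainslib.Task.async pool (fun () -> fib pool (n - 1)) in
    let right = fib pool (n - 2) in
    Domainslib.Task.await pool left + right

let () =
  let pool = Domainslib.Task.setup_pool ~num_domains:4 () in
  let result = Domainslib.Task.run pool (fun () -> fib pool 35) in
  Domainslib.Task.teardown_pool pool;
  Printf.printf "fib 35 = %d\n" result
```

In the approach described in the talk, the programmer would instead express all available parallelism without a cutoff, and the compiler and run-time system would decide which opportunities become actual tasks.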
Syllabus
[POPL'24] Automatic Parallelism Management
Taught by
ACM SIGPLAN