Overview
Explore a 14-minute conference talk from OSDI '21 that presents P3, a system for scaling Graph Neural Network (GNN) model training to large real-world graphs in distributed environments. Learn about the challenges unique to training GNNs on massive graphs with billions of nodes and edges. Discover how P3's approach eliminates high communication and partitioning overheads and introduces a pipelined push-pull parallelism execution strategy that accelerates model training. Understand the simple API P3 offers for implementing various GNN architectures. Examine how P3, combined with a basic caching strategy, outperforms existing state-of-the-art distributed GNN frameworks by up to 7 times. The talk covers an introduction to GNNs, the graph processing literature, hybrid parallelism, results, and a summary of the findings.
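The talk itself contains no code, but the hybrid (push-pull) parallelism idea it describes can be illustrated with a small sketch. The single-process NumPy mock-up below uses made-up sizes and variable names, not P3's actual API: the first GCN layer is computed model-parallel over feature slices (each simulated machine produces a partial pre-activation, the "push"), the partial results are reduced (the "pull"), and the remaining layer runs in the ordinary data-parallel fashion; the final assertion checks that this matches the plain single-machine computation.

    import numpy as np

    # Toy sizes; all names here are illustrative, not P3's API.
    NUM_NODES, IN_DIM, HID_DIM, OUT_DIM = 8, 6, 4, 2
    NUM_MACHINES = 3  # simulated feature partitions

    rng = np.random.default_rng(0)

    # Normalized adjacency (dense here only for clarity).
    A = (rng.random((NUM_NODES, NUM_NODES)) < 0.3).astype(float)
    A = A + np.eye(NUM_NODES)                      # add self-loops
    A = A / A.sum(axis=1, keepdims=True)           # row-normalize

    X = rng.standard_normal((NUM_NODES, IN_DIM))   # node features
    W1 = rng.standard_normal((IN_DIM, HID_DIM))    # layer-1 weights
    W2 = rng.standard_normal((HID_DIM, OUT_DIM))   # layer-2 weights

    relu = lambda z: np.maximum(z, 0.0)

    # "Push" phase: intra-layer model parallelism over feature slices.
    # Each simulated machine holds a column slice of X and the matching
    # row slice of W1, and computes a partial layer-1 pre-activation.
    feat_splits = np.array_split(np.arange(IN_DIM), NUM_MACHINES)
    partials = []
    for cols in feat_splits:
        X_k = X[:, cols]          # this machine's feature slice
        W1_k = W1[cols, :]        # matching rows of the layer-1 weights
        partials.append(A @ X_k @ W1_k)

    # "Pull" phase: reduce the partial activations, then continue with
    # ordinary data-parallel computation for the remaining layers.
    H1 = relu(sum(partials))      # full layer-1 activations
    out = A @ H1 @ W2             # layer 2, data-parallel in a real system

    # Sanity check: matches the single-machine GCN computation.
    ref = A @ relu(A @ X @ W1) @ W2
    assert np.allclose(out, ref)
    print("hybrid-parallel sketch matches single-machine result")

This only illustrates why splitting the first layer by feature dimension avoids shipping full feature vectors between partitions; P3's actual design adds pipelining and caching on top of this idea, which the talk covers in the Hybrid Parallelism and Results sections.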
Syllabus
Introduction
Graph Neural Networks
Graph Processing Literature
Hybrid Parallelism
Results
Summary
Taught by
USENIX