Overview
Explore a 15-minute conference talk from USENIX's NSDI '23 that introduces TACCL, a tool for optimizing collective communication when training machine learning models across multiple GPUs and servers. Delve into the challenges of efficient collective communication in distributed training and discover how TACCL uses a novel communication sketch abstraction to guide algorithm synthesis. Learn how TACCL generates optimized algorithms for a range of hardware configurations and communication collectives, significantly outperforming the NVIDIA Collective Communications Library (NCCL). Gain insights into the tool's impact on end-to-end training of popular models such as Transformer-XL and BERT, with speedups ranging from 11% to 2.3x depending on batch size.
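For context on the kind of operation TACCL targets, the following is a minimal sketch of an all-reduce, the core collective in data-parallel training. It assumes PyTorch's torch.distributed API with the NCCL backend (the baseline TACCL is compared against); it illustrates the collective itself, not TACCL's synthesized algorithms.

# Minimal all-reduce sketch using PyTorch's torch.distributed with the
# NCCL backend. TACCL synthesizes the algorithm that implements
# collectives like this one for a given hardware topology.
# Launch with: torchrun --nproc_per_node=<num_gpus> allreduce_sketch.py
import os
import torch
import torch.distributed as dist

def main():
    # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Each GPU holds a gradient shard; all-reduce sums it across all
    # ranks, as done every step when training models like BERT or
    # Transformer-XL with data parallelism.
    grad = torch.full((1024,), float(dist.get_rank()), device="cuda")
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)

    # Every rank now holds the identical summed tensor.
    print(f"rank {dist.get_rank()}: grad[0] = {grad[0].item()}")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()

How this exchange is routed across NVLink, PCIe, and network links is exactly the choice TACCL's communication sketches let users guide.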
Syllabus
NSDI '23 - TACCL: Guiding Collective Algorithm Synthesis using Communication Sketches
Taught by
USENIX