Explore a 15-minute conference talk from OSDI '23 that presents a systematic approach to scheduling computational graphs of deep neural networks (DNNs) on domain-specific architecture (DSA) platforms. Learn how this method addresses the shortcomings of existing approaches by fully accounting for the hardware architecture when partitioning computational graphs. Discover how the presented technique produces larger but fewer kernels, converts off-core data movements into on-core data exchanges, and makes better use of the DSA memory hierarchy. Gain insights into how the approach exploits the imbalanced memory usage distribution across a DNN's architecture and implements across-layer instruction scheduling. Examine performance results for seven DNN inference models on a DSA platform, comparing the proposed approach to TVM, AStitch, and vendor-crafted implementations. Additionally, investigate a GPU case study demonstrating the effectiveness of generating kernels for the proposed sub-graphs, compared against CUTLASS with and without convolution fusion.
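
The kernel-partitioning idea can be pictured with a toy sketch: group consecutive layers into one kernel while the buffers they exchange still fit in on-core memory, so data movement between those layers never leaves the core, and start a new kernel only when the budget is exceeded. The sketch below is illustrative only; the Layer class, byte sizes, and greedy packing policy are assumptions for this example, not the talk's actual algorithm.

    # Illustrative sketch: greedily pack consecutive DNN layers into fewer,
    # larger kernels whose intermediate buffers fit in on-core memory.
    # All names and sizes here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Layer:
        name: str
        activation_bytes: int  # size of this layer's output buffer

    def partition(layers, core_mem_bytes):
        """Pack consecutive layers into one kernel while the buffers exchanged
        between them still fit on-core; otherwise start a new kernel."""
        kernels, current, used = [], [], 0
        for layer in layers:
            if current and used + layer.activation_bytes > core_mem_bytes:
                kernels.append(current)   # kernel boundary: data spills off-core
                current, used = [], 0
            current.append(layer)         # exchange stays on-core inside a kernel
            used += layer.activation_bytes
        if current:
            kernels.append(current)
        return kernels

    if __name__ == "__main__":
        # Memory use is imbalanced across the network: early layers carry large
        # activations, later ones small, so later layers pack more per kernel.
        net = [Layer("conv1", 6 << 20), Layer("conv2", 4 << 20),
               Layer("conv3", 2 << 20), Layer("fc1", 1 << 20),
               Layer("fc2", 1 << 20)]
        for i, kernel in enumerate(partition(net, core_mem_bytes=8 << 20)):
            print(f"kernel {i}: {[layer.name for layer in kernel]}")

On the hypothetical sizes above, the early, memory-hungry layer gets a kernel of its own while the four later layers fuse into a single kernel, which is the flavor of "larger but fewer kernels" the talk describes.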
Syllabus
OSDI '23 - Effectively Scheduling Computational Graphs of Deep Neural Networks toward Their...
Taught by
USENIX