Explore a 20-minute conference talk from OOPSLA2 2023 that introduces StructTensor, a framework for symbolically computing tensor structure at compile time. Delve into the Structured Tensor Unified Representation (STUR), an intermediate language that captures tensor computations together with their sparsity and redundancy structures. Learn how this approach bridges the gap between the specialization of dense tensor algebra and the algorithmic efficiency of sparse tensor algebra. Discover the mathematical foundations of lossless tensor computations, which guarantee the soundness of symbolic structure computation and the optimizations built on it. Examine experimental results showing StructTensor's performance advantages over state-of-the-art frameworks for both dense and sparse tensor algebra across a range of workloads and structures.
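To make the core idea concrete, here is a minimal, hypothetical sketch in plain NumPy (not StructTensor's actual API or STUR) of why knowing a tensor's structure ahead of time pays off: when a matrix is statically known to be diagonal, a generic O(n³) dense multiply collapses to an O(n²) elementwise scaling, with no runtime sparsity bookkeeping.

```python
import numpy as np

n = 4
# Suppose A is known at compile time to be diagonal,
# so we store only its diagonal entries.
diag_a = np.array([1.0, 2.0, 3.0, 4.0])
B = np.arange(n * n, dtype=float).reshape(n, n)

# Generic dense multiply: O(n^3), ignores the structure of A.
A_dense = np.diag(diag_a)
C_dense = A_dense @ B

# Structure-aware multiply: since A[i, k] = 0 unless i == k,
# C[i, j] = diag_a[i] * B[i, j] -- an O(n^2) row scaling.
C_structured = diag_a[:, None] * B

# Both strategies produce the same result.
assert np.allclose(C_dense, C_structured)
```

Sparse-tensor libraries achieve a similar saving by checking for zeros at runtime; the talk's point is that symbolically propagating such structure through whole expressions at compile time lets the generated code skip both the wasted arithmetic and the runtime checks.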
Syllabus
[OOPSLA23] Compiling Structured Tensor Algebra
Taught by
ACM SIGPLAN