Overview
Explore a comprehensive methodology for unifying design space exploration and model compression in deep learning accelerators for TinyML applications. Delve into the challenges of deploying deep learning models on resource-constrained embedded devices and learn about SuperSlash, an innovative solution that combines design space exploration (DSE) with model compression. Discover how SuperSlash estimates off-chip memory access volume overhead, evaluates data reuse strategies, and applies layer fusion to reduce off-chip memory traffic. Gain insight into the pruning process, which is guided by a ranking function based on the off-chip memory access costs uncovered during exploration. Examine how this technique fits large DNN models onto accelerators with limited computational resources, using MobileNet V1 as an example. Engage with a detailed analysis of the extended design space, multilayer fusion, and the impact of these strategies on TinyML implementations.
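As a rough illustration of the idea, here is a minimal, hypothetical Python sketch of pruning guided by a ranking function over estimated off-chip memory access costs, with layer fusion modeled as keeping intermediate activations on-chip. The Layer fields, the cost model in offchip_cost, and the budget are illustrative assumptions, not SuperSlash's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    weights_kb: int       # weight footprint in KB (hypothetical numbers)
    activations_kb: int   # output activation footprint in KB
    fused: bool = False   # fused layers keep activations on-chip

def offchip_cost(layer: Layer) -> int:
    """Toy estimate of off-chip traffic: weights always stream from DRAM;
    activations spill off-chip only when the layer is not fused."""
    return layer.weights_kb + (0 if layer.fused else layer.activations_kb)

def rank_and_prune(layers: list[Layer], budget_kb: int, step: float = 0.1) -> None:
    """Greedy pruning loop: repeatedly shrink the layer with the highest
    estimated off-chip cost until total traffic fits the memory budget."""
    while sum(offchip_cost(l) for l in layers) > budget_kb:
        worst = max(layers, key=offchip_cost)                  # ranking function
        worst.weights_kb = int(worst.weights_kb * (1 - step))  # prune a fraction

if __name__ == "__main__":
    model = [
        Layer("conv1", weights_kb=64, activations_kb=256),
        Layer("conv2", weights_kb=512, activations_kb=128, fused=True),
        Layer("fc", weights_kb=1024, activations_kb=16),
    ]
    rank_and_prune(model, budget_kb=900)
    for l in model:
        print(f"{l.name}: estimated off-chip cost {offchip_cost(l)} KB")
```

In this sketch the ranking function is simply the per-layer off-chip cost estimate; SuperSlash derives its ranking from the costs explored during DSE, so a real implementation would substitute that explored cost model for offchip_cost.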
Syllabus
Introduction
Strategic Partners
Welcome
Agenda
Deep Neural Networks
Image Classification Networks
Hardware Accelerators
Motivation Analysis
Model Pruning
Layer Fusion
Methodology
Results
Extended Design Space
MobileNet V1
Conclusion
Questions
Multilayer Fusion
Crowd Sponsors
Taught by
tinyML