Overview
A 15-minute research symposium presentation on specialized training methods for neural networks designed to run on approximate hardware. Approximate computing can significantly benefit deep learning applications, particularly on power-constrained, battery-operated devices, but current training methodologies limit its potential. UCLA PhD student Tianmu Li demonstrates why hardware-specific training methods are necessary and presents approaches that make training up to 18x faster, helping maximize what approximate computing can offer deep learning applications.
Syllabus
tinyML Research Symposium: Training Neural Networks for Execution on Approximate Hardware
Taught by
EDGE AI FOUNDATION