Overview
Join Professor Philip Leong, CTO of CruxML Pty and computer systems expert at the University of Sydney, for a technical talk exploring Block Minifloat (BM) arithmetic and its applications in deep learning. Discover how this parameterized minifloat format optimizes low-precision deep learning by introducing an additional exponent bias that controls the dynamic range of each block of values. Learn about practical 4-8 bit implementations of inference, training, and transfer learning that achieve accuracy comparable to traditional floating-point representations. Gain insights into advancing deep neural networks through arithmetic approaches that maintain accuracy while reducing computational demands.
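The core idea described above — storing each value as a narrow minifloat while a shared, per-block exponent bias shifts the representable range — can be sketched numerically. The code below is a minimal illustration under assumptions not stated in the talk description: all exponent codes encode normal/subnormal magnitudes (no infinities or NaNs), rounding is round-to-nearest, and the shared bias is chosen so the block's largest magnitude lands near the top of the minifloat range. Function names (`minifloat_round`, `block_minifloat`) are hypothetical, not from the speaker's implementation.

```python
import numpy as np

def minifloat_round(x, e_bits, m_bits):
    """Round each element of x to the nearest minifloat value with
    e_bits exponent bits and m_bits mantissa bits.
    Assumption: no inf/NaN codes, so every exponent code is usable;
    exponent code 0 gives a subnormal grid near zero."""
    bias = 2**(e_bits - 1) - 1
    e_min = 1 - bias                       # smallest normal exponent
    e_max = (2**e_bits - 1) - bias         # largest exponent (no inf/NaN reserved)
    max_val = (2 - 2.0**-m_bits) * 2.0**e_max
    mag = np.minimum(np.abs(x), max_val)   # saturate out-of-range magnitudes
    # per-value exponent, clipped so tiny values fall on the subnormal grid
    e = np.clip(np.floor(np.log2(np.where(mag > 0, mag, 1.0))), e_min, e_max)
    scale = 2.0**(e - m_bits)              # quantization step at this exponent
    return np.sign(x) * np.round(mag / scale) * scale

def block_minifloat(x, e_bits=2, m_bits=1):
    """Block Minifloat sketch: pick one extra exponent bias per block
    (here: a power-of-two shift aligning the block maximum with the
    top of the minifloat range), quantize, then undo the shift."""
    amax = np.max(np.abs(x))
    if amax == 0:
        return np.zeros_like(x)
    bias = 2**(e_bits - 1) - 1
    e_max = (2**e_bits - 1) - bias
    t = np.floor(np.log2(amax)) - e_max    # shared block shift (the extra bias)
    return minifloat_round(x / 2.0**t, e_bits, m_bits) * 2.0**t
```

For example, with `e_bits=2, m_bits=1` (a 4-bit format with sign), the representable magnitudes are 0, 0.5, 1, 1.5, 2, 3, 4, 6; a block of small values such as `[0.1, 0.2, 0.4]` is shifted up by the shared bias before quantization, so the block retains relative precision that a fixed-bias 4-bit format would lose.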
Syllabus
tinyML Talks: Low Precision Inference and Training for Deep Neural Networks
Taught by
EDGE AI FOUNDATION