Low Precision Inference and Training for Deep Neural Networks

EDGE AI FOUNDATION via YouTube

Overview

Join Professor Philip Leong, CTO of CruxML Pty and a computer systems researcher at the University of Sydney, for a technical talk on Block Minifloat (BM) arithmetic and its applications in deep learning. The talk covers how this parameterized minifloat format suits low-precision deep learning by adding a shared exponent bias per block of values, giving fine control over each block's dynamic range. It then presents practical results for 4-8 bit inference, training, and transfer learning that reach accuracy comparable to standard floating-point representations, and discusses how such arithmetic reduces the computational and memory cost of deep neural networks while preserving model quality.
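The core idea described above, a minifloat with a shared per-block exponent bias, can be illustrated with a small quantizer. This is a minimal sketch of the general technique, not the talk's actual implementation: the function name, parameters (`e_bits`, `m_bits`, `bias_shift`), and rounding choices are illustrative assumptions.

```python
import numpy as np

def quantize_block_minifloat(x, e_bits=2, m_bits=1, bias_shift=0):
    """Quantize a block of values to a toy minifloat format with a shared
    per-block exponent bias (a sketch of the block-minifloat idea;
    names and defaults are illustrative, not from the talk)."""
    x = np.asarray(x, dtype=np.float64)
    # Shared bias: align the format's representable range with the
    # block's largest magnitude.
    max_exp = int(np.floor(np.log2(np.max(np.abs(x)) + 1e-30)))
    emax = 2 ** e_bits - 1              # largest stored exponent field
    bias = emax - max_exp + bias_shift  # the additional per-block exponent bias
    out = np.zeros_like(x)
    for i, v in enumerate(x.flat):
        if v == 0.0:
            continue
        sign = 1.0 if v > 0 else -1.0
        # Stored exponent after applying the shared bias, clamped to the field.
        exp = int(np.floor(np.log2(abs(v)))) + bias
        exp = min(max(exp, 0), emax)
        scale = 2.0 ** (exp - bias)
        # Round the mantissa to m_bits fractional bits; values below the
        # smallest normal exponent fall through as subnormal-like values.
        mant = np.round(abs(v) / scale * 2 ** m_bits) / 2 ** m_bits
        mant = min(mant, 2.0 - 2.0 ** -m_bits)  # clamp to max mantissa
        out.flat[i] = sign * mant * scale
    return out
```

Because the bias is shared across the block, only one extra integer is stored per block, while each element keeps a very narrow (e.g. 4-bit) sign/exponent/mantissa encoding. Values exactly representable in the format, such as powers of two within range, pass through unchanged.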

Syllabus

tinyML Talks: Low Precision Inference and Training for Deep Neural Networks

Taught by

EDGE AI FOUNDATION
