
Exploiting Forward-Forward Based Algorithm for Training on Device - TinyML Implementation

EDGE AI FOUNDATION via YouTube

Overview

Watch a technical conference talk exploring the implementation of the Forward-Forward (FF) algorithm for on-device training in TinyML applications. Learn how this novel training approach serves as an alternative to traditional backpropagation, specifically designed for resource-constrained microcontroller units (MCUs). Discover the advantages of on-device training, including enhanced privacy and reduced latency, and understand how the FF algorithm addresses the limited memory, energy, and computing power of embedded systems. Examine how the algorithm splits the neural network into individually trained layers, eliminating the need to store activations and gradients across the whole network. Gain insight into the mathematical foundations of FF, including its inspiration from Boltzmann machines and noise contrastive estimation, and understand how it computes a "goodness" metric from the squared activities of each neural network layer.
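The "goodness" idea described above can be sketched in a few lines. The snippet below is a minimal, illustrative example (not the speaker's implementation): it assumes a single dense layer with ReLU activations, defines goodness as the sum of squared activities, and turns it into a probability that the input is "positive" (real) data via a sigmoid with a threshold `theta`. The names and the threshold value are illustrative assumptions.

```python
import numpy as np

def relu(x):
    # Element-wise ReLU activation
    return np.maximum(x, 0.0)

def goodness(activations):
    # FF "goodness" metric: sum of squared activities of a layer
    return float(np.sum(activations ** 2))

def layer_probability(activations, theta=2.0):
    # Probability that the input is positive data:
    # sigmoid(goodness - theta); theta is an illustrative threshold
    g = goodness(activations)
    return 1.0 / (1.0 + np.exp(-(g - theta)))

# Illustrative forward pass through one dense layer
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(4, 8))  # one layer's weights
x = rng.normal(size=4)                  # a single input vector
h = relu(x @ W)                         # layer activities
print(goodness(h), layer_probability(h))
```

Because each layer is trained to raise goodness on positive data and lower it on negative data using only its own activities, no activations or gradients from other layers need to be kept in memory, which is what makes the scheme attractive for MCUs.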

Syllabus

tinyML EMEA - Marco Lattuada: Exploiting forward-forward based algorithm for training on device

Taught by

EDGE AI FOUNDATION

