
Twofold Sparsity: Joint Bit- and Network-level Sparse Deep Neural Networks for Energy-efficient RRAM Computing

EDGE AI FOUNDATION via YouTube

Overview

Watch a technical talk exploring Twofold Sparsity, a joint bit- and network-level sparsity method for energy-efficient RRAM-based Compute-in-Memory (CIM) architectures. Learn how the method addresses the challenges of running AI on edge devices by combining network sparsification with Linear Feedback Shift Register (LFSR) masks and bit-level sparsity techniques. Discover how this approach achieves energy efficiency improvements of 2.2x to 14x over traditional 8-bit networks, making it possible to run sophisticated deep learning models on power-constrained edge devices. Gain insights into how the design overcomes the limitations of the traditional Von Neumann architecture and enables more efficient on-device inference for AI-powered mobile applications.
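To make the network-level idea concrete, here is a minimal, hedged sketch of how an LFSR can generate a pseudo-random binary mask that prunes a weight tensor. This is an illustration of the general technique, not the talk's exact scheme; the function names (lfsr_bits, lfsr_mask), the seed, the tap positions, and the keep ratio are all assumptions chosen for the example.

```python
# Illustrative sketch only: an LFSR-generated pruning mask for network-level
# sparsity. Seed, taps, and keep_ratio are placeholder values, not the
# parameters used in the talk.
import numpy as np

def lfsr_bits(seed: int, taps: tuple, width: int, n_bits: int) -> np.ndarray:
    """Generate n_bits pseudo-random bits from a Fibonacci-style LFSR."""
    state = seed & ((1 << width) - 1)
    bits = []
    for _ in range(n_bits):
        bits.append(state & 1)          # output the least significant bit
        fb = 0
        for t in taps:                  # feedback = XOR of the tapped bits
            fb ^= (state >> t) & 1
        state = (state >> 1) | (fb << (width - 1))
    return np.array(bits, dtype=np.uint8)

def lfsr_mask(shape, seed=0b1010_1100_0011_0101, taps=(0, 2, 3, 5),
              width=16, keep_ratio=0.5) -> np.ndarray:
    """Build a binary keep/prune mask for a weight tensor of the given shape.

    Each weight gets 8 LFSR bits, interpreted as a value in [0, 1) and
    compared against keep_ratio, so roughly keep_ratio of weights survive.
    """
    n = int(np.prod(shape))
    raw = lfsr_bits(seed, taps, width, 8 * n).reshape(n, 8)
    vals = raw.dot(1.0 / (2.0 ** np.arange(1, 9)))
    mask = (vals < keep_ratio).astype(np.float32)
    return mask.reshape(shape)

# Usage on a toy weight matrix: only the seed and taps need to be stored to
# regenerate the mask, which is what makes LFSR-based masking hardware-friendly.
weights = np.random.randn(4, 8).astype(np.float32)
mask = lfsr_mask(weights.shape, keep_ratio=0.5)
sparse_weights = weights * mask
print(f"kept {mask.mean():.0%} of weights")
```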

Syllabus

tinyML Talks: Twofold Sparsity: Joint Bit- and Network-level Sparse Deep Neural Network for...

Taught by

EDGE AI FOUNDATION

