Memory-Efficient, Limb Position-Aware Hand Gesture Recognition Using Hyperdimensional Computing

tinyML via YouTube

Overview

Explore cutting-edge research on memory-efficient, limb position-aware hand gesture recognition using hyperdimensional computing in this 20-minute talk from the tinyML Research Symposium 2021. Delve into the innovative approach presented by Andy Zhou, a PhD student from the University of California Berkeley, addressing reliability issues in electromyogram (EMG) pattern recognition caused by limb position changes. Learn about the dual-stage classification method and its implementation challenges in wearable devices with limited resources. Discover how sensor fusion of accelerometer and EMG signals using hyperdimensional computing models can emulate dual-stage classification efficiently. Examine two methods of encoding accelerometer features for retrieving position-specific parameters from multiple models stored in superposition. Gain insights into the validation process on a dataset of 13 gestures in 8 limb positions, resulting in a classification accuracy of up to 94.34%. Understand how this approach achieves significant improvements while maintaining a minimal memory footprint compared to traditional dual-stage classification architectures.
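The overview mentions storing multiple position-specific models in superposition and retrieving them with context (accelerometer-derived) information. As a rough illustration of that idea, not the talk's actual implementation, the sketch below uses random bipolar hypervectors: each position's gesture prototype is bound (elementwise multiplied) with a position key, the bound vectors are bundled into one memory vector, and binding with a key again recovers a noisy copy of the corresponding prototype. All names and the two-position, two-gesture setup are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10000  # hypervector dimensionality; high D keeps random vectors near-orthogonal

def rand_hv():
    # random bipolar hypervector with entries in {-1, +1}
    return rng.choice([-1, 1], size=D)

# hypothetical stand-ins for trained position-specific gesture prototypes
positions = ["arm_down", "arm_up"]
gestures = ["fist", "open"]
pos_keys = {p: rand_hv() for p in positions}
prototypes = {(p, g): rand_hv() for p in positions for g in gestures}

# superposition: bind each prototype with its position key, then bundle
memory = {
    g: np.sign(sum(pos_keys[p] * prototypes[(p, g)] for p in positions))
    for g in gestures
}

def cos(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

# retrieval: binding with the same key (its own inverse for bipolar vectors)
# yields a vector much closer to the correct prototype than to any other
recovered = pos_keys["arm_down"] * memory["fist"]
print(cos(recovered, prototypes[("arm_down", "fist")]),
      cos(recovered, prototypes[("arm_up", "fist")]))
```

Because binding with a bipolar key is self-inverse, only one memory vector per gesture needs to be stored, which is the memory saving the superposition approach exploits.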

Syllabus

Introduction
Hand Gesture Recognition
Limb Position Change
Limb Position Training
The Big Question
Normal Superposition
Similarities
Dual Stage Architecture
Context-Based Orthogonalization
Context-Based Superposition
Results
Continuous Item Memory
Summary
Questions
Sponsors

Taught by

tinyML

