A Sound Mind in a Vulnerable Body - Practical Hardware Attacks on Deep Learning

USENIX Enigma Conference via YouTube

Overview

Explore practical hardware attacks on deep learning systems in this USENIX Enigma Conference talk. Delve into the vulnerabilities of machine learning models running on hardware, examining fault-injection and side-channel attacks. Learn how flipping a single bit in a deep neural network's memory representation can drastically degrade prediction accuracy, and discover how cache side-channel attacks can reverse-engineer proprietary details of a DNN's architecture. Gain insight into the under-studied vulnerability of ML to hardware attacks, and understand why additional ML-level defenses are needed: robustness guarantees proven at the algorithmic level can still be broken by attacks on the underlying hardware. Consider the implications of these findings for the security of machine learning systems and the importance of addressing both the "soundness of mind" and the "vulnerable body" in ML security research.
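
As a rough illustration of the bit-flip vulnerability the talk describes, the sketch below (a hypothetical NumPy example, not code from the talk) flips a single bit in the IEEE-754 encoding of a float32 weight. Flipping a high exponent bit turns a small parameter into an astronomically large one, which is why a single hardware fault can drastically degrade a model's accuracy.

    import numpy as np

    def flip_bit(weight, bit):
        """Flip one bit in the IEEE-754 float32 encoding of a weight."""
        as_bits = np.float32(weight).view(np.uint32)  # reinterpret the 4 bytes as an integer
        return (as_bits ^ np.uint32(1 << bit)).view(np.float32)

    w = np.float32(0.5)          # a typical, benign-looking parameter value
    corrupted = flip_bit(w, 30)  # flip the most significant exponent bit

    print(w, "->", corrupted)    # 0.5 -> 1.7014118e+38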

Syllabus

Intro
Recent Work on Secure Machine Learning
Conventional View on ML Models' Robustness
We Propose A New Perspective!
Hardware Attacks Can Break Mathematically-Proven Guarantees
(Weak) Hardware Attacks Can Be Exploited in the Cloud
Prior Work's Perspective on a Model's Robustness
The Worst-Case Perturbation
Threat Model - Single-Bit Adversaries
Evaluate the Weakest Attacker with Multiple Bit-flips
Our Attack: Reconstruction of DNN Architectures from the Trace
We Can Identify the Layers Accessed While Computing
Solution: Generate All Candidate Architectures
Solution: Eliminate Incompatible Candidates (see the sketch below)
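
The last two syllabus items outline the architecture-reconstruction strategy: generate every candidate architecture consistent with the observations, then eliminate candidates whose layer sequence disagrees with the layers identified from the cache trace. Below is a minimal sketch of that elimination step, using purely hypothetical trace and candidate data rather than anything from the talk.

    # Layer sequence inferred from the cache side-channel trace (illustrative).
    observed_trace = ["conv", "relu", "conv", "relu", "pool", "fc"]

    # Candidate architectures from the generation step (also illustrative).
    candidates = {
        "candidate_a": ["conv", "relu", "pool", "fc"],
        "candidate_b": ["conv", "relu", "conv", "relu", "pool", "fc"],
        "candidate_c": ["conv", "relu", "conv", "relu", "fc"],
    }

    # Keep only the candidates whose layer sequence matches the trace.
    compatible = {name: seq for name, seq in candidates.items() if seq == observed_trace}

    print(compatible)  # {'candidate_b': ['conv', 'relu', 'conv', 'relu', 'pool', 'fc']}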

Taught by

USENIX Enigma Conference
