

Adversary Instantiation - Lower Bounds for Differentially Private Machine Learning

IEEE via YouTube

Overview

Explore the challenges and limitations of differentially private machine learning in this 15-minute IEEE presentation. Delve into the concept of adversary instantiation and its implications for establishing lower bounds in privacy-preserving ML algorithms. Learn about the non-private nature of traditional machine learning, the integration of differential privacy, and the importance of calculating epsilon. Focus on Differentially Private Stochastic Gradient Descent (DPSGD) and examine key topics such as membership inference, worst-case scenarios, intermediate model access, and adaptive distinguishers. Gain insights into gradient poisoning attacks and their impact on privacy guarantees in machine learning systems.
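The talk centers on DP-SGD, the algorithm whose privacy guarantees the attacks lower-bound. As a rough illustration (not code from the talk), a single DP-SGD step clips each per-example gradient to a norm bound and adds Gaussian noise before updating; the parameter names (`clip_norm`, `noise_multiplier`, `lr`) are illustrative, and the noise scale shown is a simplified sketch rather than a calibrated privacy accountant.

```python
import numpy as np

def dpsgd_step(params, per_example_grads, clip_norm=1.0,
               noise_multiplier=1.1, lr=0.1, rng=None):
    """One illustrative DP-SGD update:
    clip each example's gradient, average, then add Gaussian noise."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise proportional to the clipping bound masks any
    # single example's contribution (simplified scale, not a full accountant).
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

The clipping bound caps how much any one training example can move the model, which is exactly the per-example influence that membership-inference and gradient-poisoning adversaries in the talk try to detect.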

Syllabus

Intro
Machine Learning Is Not Private
Machine Learning with Differential Privacy
We Want to Calculate the Epsilon
We Focus on DPSGD!
Membership inference
Worst-Case Example
Intermediate Model Access
Adaptive Intermediate Model Access Distinguisher
Gradient Poisoning Attack

Taught by

IEEE Symposium on Security and Privacy

