Overview
Explore the challenges and limitations of differentially private machine learning in this 15-minute IEEE presentation. Delve into the concept of adversary instantiation and its implications for establishing lower bounds in privacy-preserving ML algorithms. Learn about the non-private nature of traditional machine learning, the integration of differential privacy, and the importance of calculating epsilon. Focus on Differentially Private Stochastic Gradient Descent (DPSGD) and examine key topics such as membership inference, worst-case scenarios, intermediate model access, and adaptive distinguishers. Gain insights into gradient poisoning attacks and their impact on privacy guarantees in machine learning systems.
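The DPSGD algorithm the presentation focuses on modifies ordinary SGD by clipping each example's gradient to a fixed norm and adding Gaussian noise to the averaged update. A minimal sketch of one such step is below; the function name, hyperparameter values, and NumPy-based setup are illustrative assumptions, not code from the talk.

```python
import numpy as np

def dpsgd_step(params, per_example_grads, lr=0.1, clip_norm=1.0,
               noise_multiplier=1.1, rng=None):
    """One DP-SGD step: clip each per-example gradient, average, add noise.

    Illustrative sketch; hyperparameter defaults are arbitrary assumptions.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down so its L2 norm is at most clip_norm.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Gaussian noise calibrated to the clipping bound and batch size.
    noise = rng.normal(0.0,
                       noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

With `noise_multiplier=0` and a very large `clip_norm`, the step reduces to plain minibatch SGD, which makes the two privacy mechanisms (clipping and noise) easy to isolate when experimenting.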
Syllabus
Intro
Machine Learning Is Not Private
Machine learning with Differential Privacy
We want to calculate the epsilon.
We Focus on DPSGD!
Membership inference
Worst-Case Example
Intermediate Model Access
Adaptive Intermediate Model Access Distinguisher
Gradient Poisoning Attack
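The syllabus items above build toward the talk's central idea: instantiating an adversary (e.g. a membership inference or gradient poisoning distinguisher) and converting its measured success rate into an empirical lower bound on epsilon. Under (ε, δ)-differential privacy, any distinguisher's true-positive rate is bounded by TPR ≤ e^ε · FPR + δ, so inverting this inequality yields a lower bound. The helper below is a sketch of that inversion; the function name and default δ are assumptions for illustration.

```python
import math

def epsilon_lower_bound(tpr, fpr, delta=1e-5):
    """Empirical epsilon lower bound from a distinguisher's TPR/FPR.

    (eps, delta)-DP implies TPR <= e^eps * FPR + delta; solving for eps
    gives eps >= log((TPR - delta) / FPR). Returns 0 when the attack's
    performance is consistent with eps = 0.
    """
    if fpr <= 0.0 or tpr <= delta:
        return 0.0
    return max(0.0, math.log((tpr - delta) / fpr))
```

A stronger instantiated adversary (worst-case examples, intermediate model access, adaptive queries) pushes the measured TPR/FPR ratio up, tightening the lower bound toward the analytical epsilon.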
Taught by
IEEE Symposium on Security and Privacy