Overview
Explore a comprehensive privacy analysis of deep learning in this 17-minute IEEE conference talk. Delve into the susceptibility of deep neural networks to inference attacks and examine white-box inference techniques for both centralized and federated learning models. Discover novel membership inference attacks that exploit vulnerabilities in stochastic gradient descent algorithms. Investigate why deep learning models may leak training data information and learn how even well-generalized models can be vulnerable to white-box attacks. Analyze privacy risks in federated learning settings, including active membership inference attacks by adversarial participants. Gain insights into experimental setups, attacks on pretrained models, and the implications for privacy in deep learning systems.
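The white-box signal the talk centers on is that a model's per-example loss and gradients behave differently on training members than on non-members. The sketch below is a minimal, illustrative example in PyTorch, not the authors' implementation: the feature choice (loss plus the last layer's gradient norm), the fixed thresholds, and the assumption that the final linear layer's weight is the second-to-last parameter are all assumptions made for illustration; the paper instead trains a learned attack model on such features.

    # Minimal sketch of white-box membership inference features (illustrative only).
    import torch
    import torch.nn as nn

    def whitebox_features(model: nn.Module, x: torch.Tensor, y: torch.Tensor):
        """Per-example features an attacker can observe in the white-box setting:
        the loss and the gradient norm of the last layer's weights."""
        criterion = nn.CrossEntropyLoss()
        model.zero_grad()
        loss = criterion(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        # Assumption: the final linear layer's weight is the second-to-last parameter.
        last_weight = list(model.parameters())[-2]
        grad_norm = last_weight.grad.norm().item()
        return loss.item(), grad_norm

    def threshold_attack(loss_value: float, grad_norm: float,
                         loss_thresh: float = 0.5, grad_thresh: float = 1.0) -> bool:
        """Toy decision rule: predict 'member' when both signals are small.
        Thresholds are hypothetical; a learned attack classifier is used in practice."""
        return loss_value < loss_thresh and grad_norm < grad_thresh

In a federated setting, the same features could be recomputed on each model update an attacker observes, which is what makes the white-box attacks discussed in the talk applicable to both centralized and federated learning.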
Syllabus
Intro
Deep Learning Tasks
Privacy Threats
Membership Inference
Training a Model
Gradients Leak Information
Different Learning/Attack Settings
Active Attack on Federated Learning
Active Attacks in Federated Model
Fully Trained Model
Central Attacker in Federated Model
Local Attacker in Federated Learning
Score Function
Experimental Setup
Attacks on Pretrained Models
Federated Attacks
Conclusions
Taught by
IEEE Symposium on Security and Privacy