Overview
Explore a conference talk examining the intersection of security and privacy in deep learning models, focusing on membership inference attacks against adversarially robust models. Delve into the vulnerabilities introduced by adversarial defense methods, particularly adversarial training and provable defenses, when those models are subjected to privacy attacks. Analyze empirical findings showing that adversarially trained models are more susceptible to membership inference, with susceptibility correlating directly with model robustness. Investigate the trade-offs among privacy, security, and prediction accuracy in these defense approaches, highlighting the challenge of achieving all three simultaneously. Gain insights into the complex relationship between evasion attack mitigation and privacy preservation in deep learning systems.
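The overview above refers to membership inference attacks, which guess whether a given example was in a model's training set. A common baseline version thresholds the model's confidence on an input: training points tend to receive higher confidence than unseen points, and robust training can widen this gap. Below is a minimal sketch of that confidence-thresholding idea on synthetic confidence scores; the threshold value and the confidence distributions are illustrative assumptions, not figures from the talk.

```python
import numpy as np

def membership_inference(confidences, threshold=0.9):
    """Flag an example as a training-set 'member' when the model's
    confidence on it exceeds a fixed threshold (baseline attack)."""
    return confidences >= threshold

# Synthetic illustration: members typically draw higher confidence
# than non-members. These distributions are assumed for the sketch.
rng = np.random.default_rng(0)
member_conf = rng.uniform(0.85, 1.00, size=1000)   # hypothetical member scores
nonmember_conf = rng.uniform(0.50, 0.95, size=1000)  # hypothetical non-member scores

guesses_m = membership_inference(member_conf)
guesses_n = membership_inference(nonmember_conf)

# Balanced attack accuracy: average of true-positive and true-negative rates.
accuracy = 0.5 * (guesses_m.mean() + (1 - guesses_n.mean()))
print(f"attack accuracy: {accuracy:.2f}")
```

Anything above 0.5 means the attacker does better than random guessing; the talk's empirical claim is that this advantage grows as models are made more robust.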
Syllabus
Introduction
Security Attacks
Privacy Attacks
Membership Inference
Training Algorithm
Setting
Security and Privacy
Example
Training Methods
Results
Privacy Performance
Probability Difference
Summary
Takeaways of the talk
Taught by
IEEE Symposium on Security and Privacy