Overview
Explore a 14-minute conference talk from USENIX Security '22 examining novel model inversion attribute inference attacks on classification models. Delve into the privacy risks that machine learning systems pose in sensitive domains. Learn about confidence score-based and label-only model inversion attacks that outperform existing methods, and understand how these attacks can infer sensitive attributes of training data records using only black-box access to the target model. Examine the evaluation of these attacks on decision tree and deep neural network models trained on real datasets. Discover the concept of disparate vulnerability, where specific groups in the training dataset are more susceptible to model inversion attacks than others. Gain insights into the implications for privacy and security in machine learning applications.
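To make the core idea concrete, here is a minimal, hypothetical sketch of a confidence score-based attribute inference attack of the general kind the talk describes. It is not the speakers' implementation: the toy dataset, the `DecisionTreeClassifier` target model, and the `infer_sensitive` helper are all illustrative assumptions. The attacker knows a training record's public features and true label, queries the model as a black box with each candidate value of the sensitive attribute, and guesses the value that yields the highest confidence for the known label.

```python
# Hypothetical sketch of confidence score-based model inversion
# attribute inference; all names and data here are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Toy training data: feature 0 is the "sensitive" attribute (0 or 1)
# and correlates with the label; features 1-2 are public.
n = 500
sensitive = rng.integers(0, 2, n)
public = rng.normal(size=(n, 2))
labels = (sensitive + (public[:, 0] > 0)).clip(0, 1)
X = np.column_stack([sensitive, public])
target_model = DecisionTreeClassifier(max_depth=4).fit(X, labels)

def infer_sensitive(model, public_feats, true_label, candidates=(0, 1)):
    """Query the model once per candidate sensitive value and return
    the candidate whose prediction is most confident in the true label."""
    scores = []
    for v in candidates:
        x = np.concatenate([[v], public_feats]).reshape(1, -1)
        scores.append(model.predict_proba(x)[0][true_label])
    return candidates[int(np.argmax(scores))]

# Attack one training record using only black-box queries.
guess = infer_sensitive(target_model, X[0, 1:], labels[0])
```

The label-only (LOMIA) variant covered in the talk is strictly harder, since the attacker sees only the predicted class rather than confidence scores; this sketch shows only the simpler confidence-based setting.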
Syllabus
Intro
What is a Model Inversion Attack?
Model Inversion Attack Types
Model Inversion: Sensitive Attribute Inference
Existing Attacks and Defenses
LOMIA Intuition
LOMIA Attack Model Training
Experiment Setup
Attack Results
Disparate Vulnerability of Model Inversion
Conclusion
Taught by
USENIX