
Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models

USENIX via YouTube

Overview

Explore a 14-minute conference talk from USENIX Security '22 examining novel model inversion attribute inference attacks on classification models. Delve into the potential privacy risks associated with machine learning technologies in sensitive domains. Learn about confidence score-based and label-only model inversion attacks that outperform existing methods. Understand how these attacks can infer sensitive attributes from training data using only black-box access to the target model. Examine the evaluation of these attacks on decision tree and deep neural network models trained on real datasets. Discover the concept of disparate vulnerability, where specific groups in the training dataset may be more susceptible to model inversion attacks. Gain insights into the implications for privacy and security in machine learning applications.
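To make the idea concrete, the sketch below shows the general confidence-score-based attribute inference strategy against a black-box classifier: the attacker fills in each candidate value for the unknown sensitive attribute, queries the model, and keeps the value that makes the model most confident in the record's known label. This is a minimal illustration of the attack family discussed in the talk, not the authors' specific LOMIA method; the decision tree, synthetic data, and helper function here are hypothetical.

```python
# Minimal sketch of a confidence-score-based model inversion attribute
# inference attack (the general idea, not the authors' specific method).
# Assumptions: the attacker has only black-box access to predict_proba(),
# knows every attribute of a record except the sensitive one, and knows
# the record's true label.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def infer_sensitive_attribute(target_model, partial_record, sensitive_index,
                              candidate_values, true_label):
    """Return the candidate value that makes the target model most
    confident in the record's known true label."""
    best_value, best_confidence = None, -1.0
    for value in candidate_values:
        trial = np.array(partial_record, dtype=float)
        trial[sensitive_index] = value                    # plug in one guess
        proba = target_model.predict_proba([trial])[0]    # black-box query
        confidence = proba[true_label]
        if confidence > best_confidence:
            best_value, best_confidence = value, confidence
    return best_value

# Toy usage: train a decision tree on synthetic data, then attack one record.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4)).astype(float)
y = (X[:, 0] + X[:, 3] > 1).astype(int)      # label correlates with attribute 3
model = DecisionTreeClassifier(max_depth=3).fit(X, y)

victim = X[0].copy()
guess = infer_sensitive_attribute(model, victim, sensitive_index=3,
                                  candidate_values=[0, 1], true_label=int(y[0]))
print("inferred sensitive attribute:", guess, "actual:", victim[3])
```

The label-only variant covered in the talk follows the same guess-and-query pattern but must work without confidence scores, relying only on the predicted labels returned by the target model.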

Syllabus

Intro
What is a Model Inversion Attack?
Model Inversion Attack Types
Model Inversion - Sensitive Attribute Inference
Existing Attacks and Defenses
LOMIA Intuition
LOMIA Attack Model Training
Experiment Setup
Attack Results
Disparate Vulnerability of Model Inversion
Conclusion

Taught by

USENIX
