
YouTube

Auditing Data Privacy for Machine Learning

USENIX Enigma Conference via YouTube

Overview

Explore the critical issue of data privacy in machine learning through this 18-minute conference talk from USENIX Enigma 2022. Delve into the risks posed by large machine learning models that memorize significant amounts of individual data from their training sets. Learn about inference attacks, particularly membership inference attacks, and their role in measuring information leakage from models. Examine real-world examples from major tech companies and various sensitive datasets to understand the privacy implications. Discover the importance of auditing tools like ML Privacy Meter in assessing and mitigating privacy risks. Gain insights into the differences between privacy and confidentiality, the vulnerabilities of models to inference attacks, and methodologies for quantifying privacy risk. Understand the relevance of these concepts to ML engineers, policymakers, and researchers in developing privacy-conscious machine learning systems.
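To make the idea of membership inference concrete, here is a minimal sketch of a loss-based membership inference test, the style of attack the talk uses to quantify leakage. This is not the speaker's implementation and does not use the ML Privacy Meter API; the models, dataset, and AUC-based leakage score below are illustrative assumptions. The intuition: a model that memorizes its training data assigns noticeably lower loss to training-set members than to unseen examples, and the degree of separation is a measure of privacy risk.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Toy dataset: "members" are the training examples, "non-members" are held out.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

def per_example_loss(model, X, y):
    """Cross-entropy loss of the model on each individual example."""
    probs = model.predict_proba(X)
    return -np.log(np.clip(probs[np.arange(len(y)), y], 1e-12, 1.0))

for name, model in [
    ("logistic regression (little memorization)", LogisticRegression(max_iter=1000)),
    ("unpruned decision tree (memorizes training set)", DecisionTreeClassifier(random_state=0)),
]:
    model.fit(X_mem, y_mem)
    losses = np.concatenate([
        per_example_loss(model, X_mem, y_mem),
        per_example_loss(model, X_non, y_non),
    ])
    is_member = np.concatenate([np.ones(len(y_mem)), np.zeros(len(y_non))])

    # Attack score: lower loss means the example is more likely a training-set member.
    # The AUC of this score against true membership is a simple leakage metric:
    # 0.5 means the attacker does no better than random guessing.
    auc = roc_auc_score(is_member, -losses)
    print(f"{name}: membership-inference AUC = {auc:.2f}")

Running this, the overfit decision tree typically yields a higher AUC than the well-generalizing logistic regression, which mirrors the talk's point that models memorizing individual training records are the ones most exposed to inference attacks. Dedicated auditing tools such as ML Privacy Meter apply far more refined versions of this measurement.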

Syllabus

Intro
Main Takeaways: There is a difference between confidentiality and privacy
Privacy Regulations
Indirect Privacy Risks in Machine Learning
Machine Learning as a Service Platforms
Large Language Models
Federated Learning Algorithms
Membership Inference Attack
AI Regulations and Guidelines
Example: Language Generative Model
Examples of Vulnerable Training Data
Example: Image Classification Tasks
Auditing Data Privacy for Machine Learning

Taught by

USENIX Enigma Conference
