Overview
Explore critical privacy concerns in machine learning models through this 23-minute conference talk from USENIX Enigma 2022. Delve into Nicholas Carlini's research at Google, which uncovers how current models can leak personally identifiable information from their training datasets. Examine the case study of GPT-2, where up to 5% of the model's output is copied verbatim from the training data. Learn about the challenges of preventing data leakage, the ineffectiveness of ad-hoc privacy solutions, and the trade-offs of differentially private gradient descent. Gain insights into potential research directions and practical techniques for testing model memorization, equipping both researchers and practitioners to address this pressing issue in machine learning.
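One practical way to probe the memorization behaviour described above is to prompt a model with the beginning of a suspected training string and check whether it reproduces the rest verbatim. The sketch below is a minimal illustration, not the method from the talk; it assumes the publicly available GPT-2 weights via the Hugging Face `transformers` library, and the example string and decoding settings are illustrative choices.

```python
# Minimal sketch: probe whether GPT-2 reproduces a suspected training
# string verbatim when prompted with its prefix. Assumes the Hugging Face
# `transformers` package; the example string is illustrative only.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Prefix of a string that plausibly appears in the training data, and the
# continuation we expect if the model has memorized it.
prefix = "We the People of the United States, in Order to"
expected_suffix = "form a more perfect Union"

inputs = tokenizer(prefix, return_tensors="pt")
# Greedy decoding: memorized text tends to be reproduced exactly.
output_ids = model.generate(
    **inputs,
    max_new_tokens=16,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)
completion = tokenizer.decode(output_ids[0], skip_special_tokens=True)

if expected_suffix in completion:
    print("Verbatim continuation -> likely memorized")
else:
    print("No verbatim continuation observed")
```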
Syllabus
Do models leak training data?
Act I: Extracting Training Data
A New Attack: Training Data Extraction
1. Generate a lot of data; 2. Predict membership (see the sketch after this syllabus)
Evaluation
Up to 5% of the output of language models is verbatim copied from the training dataset
Case study: GPT-2
Act II: Ad-hoc privacy isn't
Act III: Whatever can we do?
3. Use differential privacy (see the sketch after this syllabus)
Questions?
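The two-step attack outlined in Act I ("generate a lot of data", then "predict membership") can be sketched roughly as follows. This is not the released attack code; it is a simplified illustration assuming GPT-2 via Hugging Face `transformers` and PyTorch, with tiny sample counts and a common perplexity-versus-zlib ranking heuristic chosen for brevity.

```python
# Sketch of the two-step extraction attack:
#   1. Generate many samples from the model.
#   2. Predict membership by ranking samples with a memorization signal
#      (here: low model perplexity relative to zlib compression size).
# Assumes Hugging Face `transformers` and PyTorch; all constants are illustrative.
import zlib
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))

# Step 1: generate a lot of candidate text (tiny numbers for illustration).
samples = []
prompt = tokenizer(tokenizer.eos_token, return_tensors="pt").input_ids
for _ in range(20):
    out = model.generate(
        prompt,
        do_sample=True,
        top_k=40,
        max_new_tokens=64,
        pad_token_id=tokenizer.eos_token_id,
    )
    text = tokenizer.decode(out[0], skip_special_tokens=True)
    if text.strip():
        samples.append(text)

# Step 2: predict membership -- samples the model finds "easy" (low
# perplexity) but that are not trivially repetitive (high zlib entropy)
# are the most likely to be memorized training data.
def score(text: str) -> float:
    zlib_entropy = len(zlib.compress(text.encode("utf-8")))
    return zlib_entropy / perplexity(text)

for text in sorted(samples, key=score, reverse=True)[:5]:
    print(round(score(text), 2), repr(text[:80]))
```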
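For the defence in Act III ("use differential privacy"), differentially private gradient descent bounds what any single training example can contribute by clipping per-example gradients and adding calibrated noise. The loop below is a bare-bones PyTorch sketch of that idea, not the mechanism discussed in the talk; the model, clipping norm, and noise multiplier are placeholders, and production use would rely on a vetted DP library.

```python
# Bare-bones DP-SGD step: clip each example's gradient, average, add noise.
# Model, data, clip norm C, and noise multiplier sigma are all illustrative.
import torch
from torch import nn

model = nn.Linear(10, 2)            # placeholder model
loss_fn = nn.CrossEntropyLoss()
lr, C, sigma = 0.1, 1.0, 1.0        # learning rate, clip norm, noise multiplier

def dp_sgd_step(xs: torch.Tensor, ys: torch.Tensor) -> None:
    per_example_grads = []
    for x, y in zip(xs, ys):                          # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = torch.clamp(C / (norm + 1e-12), max=1.0)   # clip to norm C
        per_example_grads.append([g * scale for g in grads])

    with torch.no_grad():
        for i, p in enumerate(model.parameters()):
            avg = torch.stack([g[i] for g in per_example_grads]).mean(dim=0)
            noise = torch.randn_like(avg) * sigma * C / len(xs)  # Gaussian noise
            p -= lr * (avg + noise)

# Example usage on a random batch (illustrative data).
dp_sgd_step(torch.randn(8, 10), torch.randint(0, 2, (8,)))
```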
Taught by
USENIX Enigma Conference