
YouTube

When Machine Learning Isn't Private

USENIX Enigma Conference via YouTube

Overview

Explore the critical privacy concerns in machine learning models through this 23-minute conference talk from USENIX Enigma 2022. Delve into Nicholas Carlini's research at Google, uncovering how current models can leak personally identifiable information from training datasets. Examine the case study of GPT-2, where up to 5% of output is copied directly from training data. Learn about the challenges in preventing data leakage, the ineffectiveness of ad-hoc privacy solutions, and the trade-offs of using differentially private gradient descent. Gain insights into potential research directions and practical techniques for testing model memorization, equipping both researchers and practitioners with valuable knowledge to address this pressing issue in the field of machine learning.
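
The extraction attack discussed in the talk follows the two steps listed in the syllabus below: generate a large amount of text from the model, then predict which generations were likely members of the training set. Here is a minimal sketch of that idea against the public GPT-2 checkpoint via the Hugging Face transformers package; the sampling settings and the perplexity/zlib scoring heuristic are illustrative assumptions, not the talk's exact procedure:

```python
# Sketch: sample heavily from GPT-2, then rank generations by how "memorized"
# they look. The scoring heuristic (perplexity divided by zlib-compressed size)
# is an illustrative stand-in for a real membership-inference metric.
import zlib

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()


def perplexity(text: str) -> float:
    """Perplexity of `text` under the model (lower = the model is more confident)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return float(torch.exp(loss))


def zlib_bits(text: str) -> int:
    """Compressed size of `text`; trivially repetitive text compresses to almost
    nothing, which helps separate 'memorized' output from merely 'generic' output."""
    return len(zlib.compress(text.encode("utf-8")))


# 1. Generate a lot of data: unconditional samples seeded with the BOS token.
prompt = torch.tensor([[tokenizer.bos_token_id]])
samples = model.generate(
    prompt, do_sample=True, max_length=64, top_k=40,
    num_return_sequences=20, pad_token_id=tokenizer.eos_token_id,
)
texts = [tokenizer.decode(s, skip_special_tokens=True).strip() for s in samples]
texts = [t for t in texts if t]

# 2. Predict membership: flag generations with unusually low perplexity relative
#    to their compressed size; the top of this list is what a human would review.
suspicious = sorted(texts, key=lambda t: perplexity(t) / max(zlib_bits(t), 1))
for text in suspicious[:5]:
    print(repr(text[:80]))
```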

Syllabus

Do models leak training data?
Act I: Extracting Training Data
A New Attack: Training Data Extraction
1. Generate a lot of data 2. Predict membership
Evaluation
Up to 5% of the output of language models is copied verbatim from the training dataset
Case study: GPT-2
Act II: Ad-hoc privacy isn't
Act III: Whatever can we do?
3. Use differential privacy (sketched after this syllabus)
Questions?
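
The mitigation in Act III, differentially private gradient descent (DP-SGD), clips each example's gradient to a fixed norm and adds calibrated Gaussian noise before the optimizer step, which bounds how much any single training example can influence the model, at the accuracy and speed cost noted in the overview. Below is a minimal sketch of a single DP-SGD update on a toy PyTorch model; the clipping bound, noise multiplier, and model are arbitrary illustrative choices, and a real deployment would use a vetted library with a proper privacy accountant:

```python
# Sketch of one DP-SGD update: clip each example's gradient, sum, add noise, step.
# max_grad_norm and noise_multiplier are illustrative values, not recommendations.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(10, 2)              # toy model standing in for a real network
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

max_grad_norm = 1.0                   # clipping bound C
noise_multiplier = 1.1                # sigma: noise std is sigma * C

x = torch.randn(8, 10)                # one minibatch of 8 examples
y = torch.randint(0, 2, (8,))

# Accumulate per-example gradients, each clipped to norm at most C.
summed = [torch.zeros_like(p) for p in model.parameters()]
for xi, yi in zip(x, y):
    model.zero_grad()
    loss = loss_fn(model(xi.unsqueeze(0)), yi.unsqueeze(0))
    loss.backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    scale = min(1.0, max_grad_norm / (float(total_norm) + 1e-6))
    for s, g in zip(summed, grads):
        s += g * scale

# Add calibrated Gaussian noise to the summed gradients, average, and step.
model.zero_grad()
for p, s in zip(model.parameters(), summed):
    noise = torch.normal(0.0, noise_multiplier * max_grad_norm, size=s.shape)
    p.grad = (s + noise) / len(x)
optimizer.step()
```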

Taught by

USENIX Enigma Conference
