Overview
Explore the concept of differentially private model publishing for deep learning in this IEEE conference talk. Delve into techniques for balancing privacy loss against model accuracy when neural networks are trained on sensitive data. Learn about zero-concentrated differential privacy (zCDP), refined privacy loss analysis under random sampling and shuffling, and dynamic privacy budget allocation over the course of training. Understand the challenges of protecting individual privacy when sharing pre-trained models, and the solutions proposed to address them. Examine the implementation of these methods through extensive experiments demonstrating improved privacy loss accounting, training efficiency, and model quality under a given privacy budget.
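As background for the accounting machinery the talk relies on, the sketch below is a minimal Python illustration of how zero-concentrated differential privacy (zCDP) can track the cumulative privacy loss of Gaussian-noised gradient updates when the noise level changes during training. It is not the paper's accountant: it assumes a hypothetical exponentially decaying noise-multiplier schedule, composes per-step costs naively, and deliberately omits the refined treatment of random sampling and shuffling presented in the talk, so the resulting epsilon is conservative. All function names and parameters are illustrative.

import math


def rho_per_step(noise_multiplier):
    # One gradient step with per-example gradients clipped to norm C and noise
    # std = noise_multiplier * C has L2 sensitivity C, so it satisfies rho-zCDP
    # with rho = 1 / (2 * noise_multiplier^2)  (Bun & Steinke, 2016).
    return 1.0 / (2.0 * noise_multiplier ** 2)


def zcdp_to_dp(rho, delta):
    # Standard conversion: rho-zCDP implies (rho + 2*sqrt(rho*ln(1/delta)), delta)-DP.
    return rho + 2.0 * math.sqrt(rho * math.log(1.0 / delta))


def account(epochs, steps_per_epoch, sigma0, decay, delta):
    # Compose per-step zCDP costs additively while the noise multiplier decays
    # each epoch -- a simple stand-in for dynamic privacy budget allocation
    # (less noise, i.e. more budget, is spent as training converges).
    total_rho = 0.0
    for epoch in range(epochs):
        sigma_t = sigma0 * (decay ** epoch)
        total_rho += steps_per_epoch * rho_per_step(sigma_t)
    return total_rho, zcdp_to_dp(total_rho, delta)


if __name__ == "__main__":
    rho, eps = account(epochs=20, steps_per_epoch=50,
                       sigma0=10.0, decay=0.98, delta=1e-5)
    print(f"cumulative rho = {rho:.3f}  ->  ({eps:.2f}, 1e-05)-DP")

The numbers printed are purely illustrative; the talk's contribution is precisely the tighter accounting (and budget allocation) that makes such bounds far less pessimistic in practice.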
Syllabus
Intro
Deep Learning
Model Publishing
Differential Privacy
Composition Theorem
Multistep Machine Learning
Stochastic Gradient Descent
Technical Challenge
Summary
Minibatch
Random shuffling
Zero-concentrated differential privacy
Random sampling preserves privacy
Smarter budget allocation
SGD graph
Budget Allocation
Presentation Summary
Questions
Taught by
IEEE Symposium on Security and Privacy