Syllabus
Intro
Privacy and Learning
Privacy Preserving Learning
Stochastic Optimization
Private Stochastic Convex Optimization
Typical Strategy 1: DP-SGD
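For reference, a minimal NumPy sketch of the DP-SGD template (per-example gradient clipping plus Gaussian noise on the clipped sum); `grad_fn`, `clip_norm`, `noise_multiplier`, and `lr` are illustrative placeholders, not names taken from the talk.

```python
import numpy as np

def dp_sgd_step(w, batch, grad_fn, clip_norm=1.0, noise_multiplier=1.0,
                lr=0.1, rng=None):
    """One DP-SGD step: clip each per-example gradient, add Gaussian noise
    to the clipped sum, then take an averaged gradient step."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = []
    for example in batch:
        g = grad_fn(w, example)                                    # per-example gradient
        g = g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))  # enforce ||g|| <= clip_norm
        clipped.append(g)
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        scale=noise_multiplier * clip_norm, size=w.shape)          # Gaussian mechanism
    return w - lr * noisy_sum / len(batch)
```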
Typical Strategy 2: Bespoke Analysis
Summary of the two techniques
High-level result
Outline of Strategy
Key Ingredient 1: Online (Linear) Optimization/Learning
Online Linear Optimization
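For concreteness, the standard objective in online linear optimization, in notation assumed here rather than taken from the talk: the learner plays w_t, then observes a linear loss given by g_t, and competes with any fixed comparator u.

```latex
\mathrm{Regret}_T(u) \;=\; \sum_{t=1}^{T} \langle g_t,\, w_t - u \rangle .
```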
Key Ingredient 2: Online-to-Batch Conversion
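The classic conversion runs the online learner on stochastic gradients of a convex objective F and returns the average iterate; by convexity, excess risk is at most the average regret. A hedged statement of the standard guarantee:

```latex
\bar{w}_T = \frac{1}{T}\sum_{t=1}^{T} w_t ,
\qquad
\mathbb{E}\!\left[F(\bar{w}_T)\right] - F(u)
\;\le\; \frac{\mathbb{E}\!\left[\mathrm{Regret}_T(u)\right]}{T}.
```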
Straw man algorithm: Gaussian mechanism + online-to-batch
Key Ingredient 2: Anytime Online-to-Batch Conversion
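In the anytime variant (in the style of Cutkosky, 2019), gradients are queried at the running average of the learner's iterates, so the batch guarantee holds at every prefix rather than only at time T. A sketch, with notation assumed:

```latex
x_t = \frac{1}{t}\sum_{s=1}^{t} w_s ,
\qquad
g_t = \nabla f(x_t; z_t) \ \text{is fed to the learner},
\qquad
\mathbb{E}\!\left[F(x_t)\right] - F(u)
\;\le\; \frac{\mathbb{E}\!\left[\mathrm{Regret}_t(u)\right]}{t}.
```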
Important Property of Anytime Online-to-Batch
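A plausible reading of the property in question: because x_t is a running average, consecutive gradient-query points move by only a 1/t fraction of the learner's step, so successive gradients (and their differences) change slowly.

```latex
x_{t+1} = \frac{t\,x_t + w_{t+1}}{t+1}
\quad\Longrightarrow\quad
x_{t+1} - x_t = \frac{w_{t+1} - x_t}{t+1},
```

so the query points satisfy ‖x_{t+1} − x_t‖ = O(1/t) whenever the iterates remain in a bounded set.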
Anytime vs. Classic Sensitivity
Gradient as sum of gradient differences
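One way to read this step (an assumed instantiation, with Δ_s introduced here only for illustration): each queried gradient telescopes into a prefix sum of increments, and the increments stay small because the query points move slowly, which is exactly the shape of problem tree aggregation handles.

```latex
\Delta_1 = \nabla f(x_1; z_1),
\qquad
\Delta_s = \nabla f(x_s; z_s) - \nabla f(x_{s-1}; z_{s-1}) \ \ (s \ge 2),
\qquad
\nabla f(x_t; z_t) = \sum_{s=1}^{t} \Delta_s .
```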
Our actual strategy
Final Ingredient: Tree Aggregation
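Tree aggregation (the binary counting mechanism of Dwork et al. and Chan–Shi–Song) releases all prefix sums of a stream while touching only O(log T) noisy nodes per prefix, so the noise per released sum grows logarithmically in T rather than linearly. A minimal NumPy sketch, assuming each stream element has bounded norm; `sigma` is left uncalibrated and the calibration to a target (ε, δ) is omitted:

```python
import numpy as np

def binary_mechanism(stream, sigma, rng=None):
    """Noisy prefix sums via the binary-tree (counting) mechanism.

    Each element contributes to at most ~log2(T) tree nodes, and each released
    prefix is assembled from at most ~log2(T) noisy nodes."""
    if rng is None:
        rng = np.random.default_rng()
    stream = [np.asarray(x, dtype=float) for x in stream]
    T = len(stream)
    levels = T.bit_length() + 1
    alpha = [None] * levels        # exact partial sums held at each level
    noisy = [None] * levels        # their noisy releases
    out = []
    for t in range(1, T + 1):
        x = stream[t - 1]
        i = (t & -t).bit_length() - 1          # lowest set bit of t
        acc = x.copy()
        for j in range(i):                     # merge lower levels into level i
            if alpha[j] is not None:
                acc += alpha[j]
            alpha[j] = None
            noisy[j] = None
        alpha[i] = acc
        noisy[i] = acc + rng.normal(scale=sigma, size=acc.shape)
        prefix = np.zeros_like(x)              # prefix(t) = noisy nodes at set bits of t
        for j in range(levels):
            if (t >> j) & 1 and noisy[j] is not None:
                prefix += noisy[j]
        out.append(prefix)
    return out
```

Applied to a stream of gradient differences as above, the intent is that each released prefix reconstructs a full gradient while paying only logarithmically many noise terms.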
Final Algorithm
Loose Ends
Unpacking the bound
Applications: Adaptivity
Applications: Parameter-free/Comparator Adaptive
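Comparator-adaptive (parameter-free) online learners achieve regret scaling with the unknown comparator norm, typically of the form below up to logarithmic factors, simultaneously for all u and without tuning a learning rate to ‖u‖:

```latex
\mathrm{Regret}_T(u) \;\le\; \tilde{O}\!\left( (1 + \|u\|)\,\sqrt{T} \right)
\quad \text{for all } u \text{ simultaneously.}
```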
Fine Print and Open Problems
Taught by
Google TechTalks