Differentially Private Online-to-Batch Conversion for Stochastic Optimization

Google TechTalks via YouTube

Classroom Contents

  1. Intro
  2. Privacy and Learning
  3. Privacy Preserving Learning
  4. Stochastic Optimization
  5. Private Stochastic Convex Optimization
  6. Typical Strategy 1: DP-SGD
  7. Typical Strategy 2: Bespoke Analysis
  8. Two techniques summary
  9. High-level result
  10. Outline of Strategy
  11. Key Ingredient 1: Online (Linear) Optimization/Learning
  12. Online Linear Optimization
  13. Key Ingredient 2: Online-to-Batch Conversion
  14. Straw man algorithm: Gaussian mechanism + online-to-batch
  15. Key Ingredient 2: Anytime Online-to-Batch Conversion (see the sketch after this list)
  16. Important Property of Anytime Online-to-Batch
  17. Anytime vs Classic Sensitivity
  18. Gradient as sum of gradient differences
  19. Our actual strategy
  20. Final Ingredient: Tree Aggregation
  21. Final Algorithm
  22. Loose Ends
  23. Unpacking the bound
  24. Applications: Adaptivity
  25. Applications: Parameter-free/Comparator Adaptive
  26. Fine Print, Open problems
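
Below is a minimal sketch of one ingredient named in the outline, the anytime online-to-batch conversion (item 15), wrapped around plain online gradient descent. It omits the privacy machinery the talk builds on top of it (the Gaussian mechanism and tree aggregation of items 14, 18, and 20), and the function and parameter names are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def anytime_online_to_batch(grad_oracle, dim, num_steps, lr=0.1):
    """Illustrative, non-private anytime online-to-batch conversion.

    grad_oracle(x) is assumed to return a stochastic gradient of the
    objective F at the point x; all names here are hypothetical.
    """
    w = np.zeros(dim)   # online learner's iterate (online gradient descent)
    x = np.zeros(dim)   # running average of the learner's iterates
    for t in range(1, num_steps + 1):
        # The query point x_t is the running average of w_1, ..., w_t,
        # so consecutive query points change only slowly.
        x += (w - x) / t
        g = grad_oracle(x)   # stochastic gradient evaluated at x_t
        w -= lr * g          # online learner update (OGD step)
    # Standard guarantee: E[F(x_T) - F(x*)] <= E[Regret_T] / T.
    return x
```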
