Overview
Explore the value of collaboration in convex machine learning with differential privacy in this IEEE conference talk. Delve into the application of machine learning to distributed private data owned by multiple entities, where noisy, differentially private gradients are used to minimize the fitness cost via stochastic gradient descent. Examine the trade-off between privacy and utility by quantifying model quality as a function of privacy budget and dataset size. Discover how to predict collaboration outcomes among privacy-aware data owners before executing computationally expensive algorithms, and learn how the difference in model fitness between collaborative and standalone training varies inversely with dataset size and privacy budget. Validate these performance predictions on practical financial datasets, including interest-rate determination for loans using regression and credit card fraud detection using support vector machines. Gain insights into training with differentially private gradients, the convergence of the learning algorithm, and the future of privacy-aware machine learning.
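The training procedure described above can be sketched in a few lines. Below is a minimal, hypothetical example of differentially private SGD on a convex least-squares objective; the clipping bound, noise multiplier, and learning-rate schedule are illustrative assumptions, not values taken from the talk.

```python
# Minimal sketch of differentially private SGD for convex least-squares
# regression. All hyperparameters (clip bound, noise scale, learning-rate
# decay) are illustrative assumptions, not values from the talk.
import numpy as np

def dp_sgd(X, y, epochs=50, clip=1.0, noise_multiplier=1.0, lr0=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    step = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            step += 1
            # Per-example gradient of the squared-error fitness cost.
            g = (X[i] @ w - y[i]) * X[i]
            # Clip the gradient to bound its sensitivity.
            g = g / max(1.0, np.linalg.norm(g) / clip)
            # Add Gaussian noise calibrated to the clipping bound.
            g += rng.normal(0.0, noise_multiplier * clip, size=d)
            # Decaying step size: general convex objectives call for a
            # slower-decaying schedule (~1/sqrt(t)) than strongly convex
            # ones (~1/t), echoing the syllabus item below.
            w -= (lr0 / np.sqrt(step)) * g
    return w

# Toy usage: recover a linear relationship from synthetic data.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
true_w = np.array([0.5, -1.0, 2.0])
y = X @ true_w + 0.1 * rng.normal(size=500)
print(dp_sgd(X, y))
```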
Syllabus
Learning on multiple datasets
State-of-the-art
Training ML models
Learning on private datasets
Training with DP gradients for general convex fitness functions
Slower decaying learning rate
Value of collaboration
Experiment with loan data
Convergence of learning algorithm
Prediction vs reality
Conclusions and future work
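The syllabus items on the value of collaboration and prediction vs. reality concern deciding, before any expensive training run, whether pooling data will pay off. Here is a back-of-the-envelope sketch under an assumed penalty model in which excess fitness cost decays with dataset size n and privacy budget eps; the functional form and constants are illustrative assumptions, not formulas from the talk.

```python
# Hedged sketch: predicting whether collaboration helps before training.
# Assumes excess fitness cost ~ statistical error + privacy error, with the
# privacy term shrinking in both n and eps; constants are placeholders.
def predicted_excess_cost(n, eps, d, c_stat=1.0, c_priv=1.0):
    # Statistical error decays as 1/sqrt(n); the privacy penalty decays
    # as sqrt(d)/(n * eps), so larger pools and looser budgets both help.
    return c_stat / n**0.5 + c_priv * d**0.5 / (n * eps)

def collaboration_helps(n_own, n_joint, eps, d):
    # Collaborate only if the pooled dataset is predicted to fit better.
    return predicted_excess_cost(n_joint, eps, d) < predicted_excess_cost(n_own, eps, d)

# Example: an owner with 10k records weighing a 50k-record joint pool.
print(collaboration_helps(n_own=10_000, n_joint=50_000, eps=0.5, d=20))
```

Because the predicted difference shrinks as n and eps grow, well-resourced owners with generous privacy budgets stand to gain the least from collaboration, which is the inverse relationship noted in the overview.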
Taught by
IEEE Symposium on Security and Privacy