Overview
Watch a research presentation from USENIX Security '24 exploring how differentially private stochastic gradient descent (DP-SGD) provides stronger privacy guarantees than previously thought for many datapoints in common benchmark datasets. Learn about a novel per-instance privacy analysis showing that points with similar neighbors in the dataset enjoy stronger data-dependent privacy protection than outliers. Discover how the researchers developed a new composition theorem for analyzing entire training runs, formally proving that DP-SGD leaks significantly less information than current data-independent guarantees indicate when training on standard benchmarks. Understand the implications for privacy attacks, which may fail against many datapoints when the adversary lacks sufficient control over the training dataset.
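To make the talk's central object concrete, below is a minimal, generic sketch of a DP-SGD step; it is not the paper's implementation, and the function name `dp_sgd_step` and its parameters are illustrative. It shows where the worst-case sensitivity bound comes from: each example's gradient is clipped to an L2 norm of `clip_norm` before summing, so removing any one example can change the sum by at most `clip_norm`, and Gaussian noise is calibrated to that bound.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One DP-SGD step on a batch of per-example gradients (illustrative).

    The summed gradient has worst-case L2 sensitivity clip_norm, because
    every per-example gradient is clipped to that norm before summing.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so each gradient has L2 norm <= clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    summed = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the worst-case sensitivity (clip_norm).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)
```

The talk's observation is that this worst-case bound is often loose in practice: when a datapoint's clipped gradient looks like those of its neighbors, removing it changes the noisy sum by far less than `clip_norm`, which is the intuition behind the per-instance, data-dependent privacy analysis presented here.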
Syllabus
USENIX Security '24 - Gradients Look Alike: Sensitivity is Often Overestimated in DP-SGD
Taught by
USENIX