Overview
Watch an 11-minute conference presentation from USENIX Security '24 exploring a novel information-theoretic approach to protecting against data reconstruction attacks in federated learning systems. Learn how researchers from Tsinghua University and Beijing Institute of Technology developed a channel model to quantify and constrain information leakage during the federated learning process. Discover how their proposed algorithms limit the information transmitted during local training rounds while maintaining model accuracy and training efficiency. Understand how their theoretical framework can strengthen existing privacy techniques such as differential privacy, providing stronger guarantees against reconstruction attacks. See the experimental validation of their methods on real-world datasets and their effectiveness in defending against privacy threats in federated learning environments.
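To make the idea of "limiting transmitted information" concrete, the sketch below shows one generic mechanism often used for this purpose in federated learning: clipping a client's model update and adding Gaussian noise before sending it to the server, so the transmitted message can only reveal a bounded amount about the local data. This is a minimal illustrative example in the spirit of differentially private federated averaging; the function and parameter names (constrained_local_update, clip_norm, noise_std) are hypothetical and it is not the specific algorithm presented in the talk.

    # Illustrative sketch only: cap how much information a client's update
    # can carry per round by clipping its norm and adding Gaussian noise.
    # Names and parameters are hypothetical, not taken from the paper.
    import numpy as np

    def constrained_local_update(global_weights, local_weights,
                                 clip_norm=1.0, noise_std=0.1, rng=None):
        """Return a privatized model delta to send to the server.

        clip_norm bounds the L2 norm of the raw update; noise_std controls
        the Gaussian noise added on top. Together they limit how much the
        transmitted message can reveal about the client's training data.
        """
        rng = rng or np.random.default_rng()
        delta = local_weights - global_weights            # raw local update
        norm = np.linalg.norm(delta)
        if norm > clip_norm:                              # clip to bound sensitivity
            delta = delta * (clip_norm / norm)
        delta = delta + rng.normal(0.0, noise_std, size=delta.shape)  # mask residual info
        return delta

    # Example: a client whose local model has drifted slightly from the global one
    global_w = np.zeros(10)
    local_w = np.ones(10) * 0.5
    print(constrained_local_update(global_w, local_w))

The design choice here is that both knobs trade privacy against utility: a tighter clip_norm and larger noise_std reduce what an attacker can reconstruct from the update, but also slow convergence, which is the tension the talk's information-theoretic framework is aimed at quantifying.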
Syllabus
USENIX Security '24 - Defending Against Data Reconstruction Attacks in Federated Learning: An Information Theory Approach
Taught by
USENIX